00:00:00.000 Started by upstream project "autotest-per-patch" build number 126231 00:00:00.000 originally caused by: 00:00:00.000 Started by user sys_sgci 00:00:00.100 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.101 The recommended git tool is: git 00:00:00.101 using credential 00000000-0000-0000-0000-000000000002 00:00:00.102 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.152 Fetching changes from the remote Git repository 00:00:00.154 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.204 Using shallow fetch with depth 1 00:00:00.204 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.204 > git --version # timeout=10 00:00:00.247 > git --version # 'git version 2.39.2' 00:00:00.247 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.276 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.276 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.706 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.720 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.733 Checking out Revision 7caca6989ac753a10259529aadac5754060382af (FETCH_HEAD) 00:00:04.733 > git config core.sparsecheckout # timeout=10 00:00:04.745 > git read-tree -mu HEAD # timeout=10 00:00:04.762 > git checkout -f 7caca6989ac753a10259529aadac5754060382af # timeout=5 00:00:04.784 Commit message: "jenkins/jjb-config: Purge centos leftovers" 00:00:04.784 > git rev-list --no-walk 7caca6989ac753a10259529aadac5754060382af # timeout=10 00:00:04.892 [Pipeline] Start of Pipeline 00:00:04.905 [Pipeline] library 00:00:04.906 Loading library shm_lib@master 00:00:04.907 Library shm_lib@master is cached. Copying from home. 00:00:04.920 [Pipeline] node 00:00:04.930 Running on VM-host-WFP1 in /var/jenkins/workspace/nvmf-tcp-vg-autotest_2 00:00:04.933 [Pipeline] { 00:00:04.942 [Pipeline] catchError 00:00:04.943 [Pipeline] { 00:00:04.955 [Pipeline] wrap 00:00:04.965 [Pipeline] { 00:00:04.973 [Pipeline] stage 00:00:04.976 [Pipeline] { (Prologue) 00:00:04.998 [Pipeline] echo 00:00:04.999 Node: VM-host-WFP1 00:00:05.005 [Pipeline] cleanWs 00:00:05.013 [WS-CLEANUP] Deleting project workspace... 00:00:05.013 [WS-CLEANUP] Deferred wipeout is used... 
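Note: the checkout above amounts to a shallow, pinned fetch of the jbp config repo. A minimal hand-run equivalent is sketched here (the target directory name is illustrative; credential and proxy setup shown in the log are omitted):

    # Reproduce the jbp checkout the Jenkins git plugin performed above
    git init jbp && cd jbp
    git fetch --tags --force --depth=1 -- \
        https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
    # Detach onto the exact revision the job used ("jenkins/jjb-config: Purge centos leftovers")
    git checkout -f 7caca6989ac753a10259529aadac5754060382af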
00:00:05.020 [WS-CLEANUP] done 00:00:05.224 [Pipeline] setCustomBuildProperty 00:00:05.312 [Pipeline] httpRequest 00:00:05.328 [Pipeline] echo 00:00:05.330 Sorcerer 10.211.164.101 is alive 00:00:05.338 [Pipeline] httpRequest 00:00:05.342 HttpMethod: GET 00:00:05.342 URL: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:05.343 Sending request to url: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:05.356 Response Code: HTTP/1.1 200 OK 00:00:05.357 Success: Status code 200 is in the accepted range: 200,404 00:00:05.357 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:06.909 [Pipeline] sh 00:00:07.187 + tar --no-same-owner -xf jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:07.199 [Pipeline] httpRequest 00:00:07.221 [Pipeline] echo 00:00:07.222 Sorcerer 10.211.164.101 is alive 00:00:07.228 [Pipeline] httpRequest 00:00:07.231 HttpMethod: GET 00:00:07.232 URL: http://10.211.164.101/packages/spdk_cd61d4ab37909ba3c2d0adcaf2b40966f5610a12.tar.gz 00:00:07.232 Sending request to url: http://10.211.164.101/packages/spdk_cd61d4ab37909ba3c2d0adcaf2b40966f5610a12.tar.gz 00:00:07.248 Response Code: HTTP/1.1 200 OK 00:00:07.248 Success: Status code 200 is in the accepted range: 200,404 00:00:07.248 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk_cd61d4ab37909ba3c2d0adcaf2b40966f5610a12.tar.gz 00:00:42.484 [Pipeline] sh 00:00:42.765 + tar --no-same-owner -xf spdk_cd61d4ab37909ba3c2d0adcaf2b40966f5610a12.tar.gz 00:00:45.309 [Pipeline] sh 00:00:45.588 + git -C spdk log --oneline -n5 00:00:45.588 cd61d4ab3 scripts/setup.sh: Use HUGE_EVEN_ALLOC logic by default 00:00:45.588 a95bbf233 blob: set parent_id properly on spdk_bs_blob_set_external_parent. 
00:00:45.588 248c547d0 nvmf/tcp: add option for selecting a sock impl 00:00:45.588 2d30d9f83 accel: introduce tasks in sequence limit 00:00:45.588 2728651ee accel: adjust task per ch define name 00:00:45.610 [Pipeline] writeFile 00:00:45.633 [Pipeline] sh 00:00:45.916 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:45.930 [Pipeline] sh 00:00:46.212 + cat autorun-spdk.conf 00:00:46.212 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:46.212 SPDK_TEST_NVMF=1 00:00:46.212 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:46.212 SPDK_TEST_USDT=1 00:00:46.212 SPDK_TEST_NVMF_MDNS=1 00:00:46.212 SPDK_RUN_UBSAN=1 00:00:46.212 NET_TYPE=virt 00:00:46.212 SPDK_JSONRPC_GO_CLIENT=1 00:00:46.212 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:46.219 RUN_NIGHTLY=0 00:00:46.224 [Pipeline] } 00:00:46.246 [Pipeline] // stage 00:00:46.265 [Pipeline] stage 00:00:46.268 [Pipeline] { (Run VM) 00:00:46.288 [Pipeline] sh 00:00:46.576 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:46.576 + echo 'Start stage prepare_nvme.sh' 00:00:46.576 Start stage prepare_nvme.sh 00:00:46.576 + [[ -n 6 ]] 00:00:46.576 + disk_prefix=ex6 00:00:46.576 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest_2 ]] 00:00:46.576 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/autorun-spdk.conf ]] 00:00:46.576 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/autorun-spdk.conf 00:00:46.576 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:46.576 ++ SPDK_TEST_NVMF=1 00:00:46.576 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:46.576 ++ SPDK_TEST_USDT=1 00:00:46.576 ++ SPDK_TEST_NVMF_MDNS=1 00:00:46.576 ++ SPDK_RUN_UBSAN=1 00:00:46.576 ++ NET_TYPE=virt 00:00:46.576 ++ SPDK_JSONRPC_GO_CLIENT=1 00:00:46.576 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:46.576 ++ RUN_NIGHTLY=0 00:00:46.576 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest_2 00:00:46.576 + nvme_files=() 00:00:46.576 + declare -A nvme_files 00:00:46.576 + backend_dir=/var/lib/libvirt/images/backends 00:00:46.576 + nvme_files['nvme.img']=5G 00:00:46.576 + nvme_files['nvme-cmb.img']=5G 00:00:46.576 + nvme_files['nvme-multi0.img']=4G 00:00:46.576 + nvme_files['nvme-multi1.img']=4G 00:00:46.576 + nvme_files['nvme-multi2.img']=4G 00:00:46.576 + nvme_files['nvme-openstack.img']=8G 00:00:46.576 + nvme_files['nvme-zns.img']=5G 00:00:46.576 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:46.576 + (( SPDK_TEST_FTL == 1 )) 00:00:46.576 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:46.576 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:00:46.576 + for nvme in "${!nvme_files[@]}" 00:00:46.576 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi2.img -s 4G 00:00:46.576 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:46.576 + for nvme in "${!nvme_files[@]}" 00:00:46.576 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-cmb.img -s 5G 00:00:47.145 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:47.145 + for nvme in "${!nvme_files[@]}" 00:00:47.145 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-openstack.img -s 8G 00:00:47.145 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:47.145 + for nvme in "${!nvme_files[@]}" 00:00:47.145 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-zns.img -s 5G 00:00:47.145 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:47.145 + for nvme in "${!nvme_files[@]}" 00:00:47.145 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi1.img -s 4G 00:00:47.145 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:47.145 + for nvme in "${!nvme_files[@]}" 00:00:47.145 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi0.img -s 4G 00:00:47.404 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:47.404 + for nvme in "${!nvme_files[@]}" 00:00:47.404 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme.img -s 5G 00:00:47.970 Formatting '/var/lib/libvirt/images/backends/ex6-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:47.970 ++ sudo grep -rl ex6-nvme.img /etc/libvirt/qemu 00:00:47.970 + echo 'End stage prepare_nvme.sh' 00:00:47.970 End stage prepare_nvme.sh 00:00:47.983 [Pipeline] sh 00:00:48.319 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:48.319 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex6-nvme.img -b /var/lib/libvirt/images/backends/ex6-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img -H -a -v -f fedora38 00:00:48.319 00:00:48.319 DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk/scripts/vagrant 00:00:48.319 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk 00:00:48.319 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2 00:00:48.319 HELP=0 00:00:48.319 DRY_RUN=0 00:00:48.319 NVME_FILE=/var/lib/libvirt/images/backends/ex6-nvme.img,/var/lib/libvirt/images/backends/ex6-nvme-multi0.img, 00:00:48.319 NVME_DISKS_TYPE=nvme,nvme, 00:00:48.319 NVME_AUTO_CREATE=0 00:00:48.319 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img, 00:00:48.319 NVME_CMB=,, 00:00:48.319 NVME_PMR=,, 00:00:48.319 NVME_ZNS=,, 00:00:48.319 NVME_MS=,, 00:00:48.319 NVME_FDP=,, 00:00:48.319 
SPDK_VAGRANT_DISTRO=fedora38 00:00:48.319 SPDK_VAGRANT_VMCPU=10 00:00:48.319 SPDK_VAGRANT_VMRAM=12288 00:00:48.319 SPDK_VAGRANT_PROVIDER=libvirt 00:00:48.319 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:48.319 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:48.319 SPDK_OPENSTACK_NETWORK=0 00:00:48.319 VAGRANT_PACKAGE_BOX=0 00:00:48.319 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile 00:00:48.319 FORCE_DISTRO=true 00:00:48.319 VAGRANT_BOX_VERSION= 00:00:48.319 EXTRA_VAGRANTFILES= 00:00:48.319 NIC_MODEL=e1000 00:00:48.319 00:00:48.320 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt' 00:00:48.320 /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest_2 00:00:50.852 Bringing machine 'default' up with 'libvirt' provider... 00:00:52.226 ==> default: Creating image (snapshot of base box volume). 00:00:52.226 ==> default: Creating domain with the following settings... 00:00:52.226 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721067613_d43b8d212f955a1a43e7 00:00:52.226 ==> default: -- Domain type: kvm 00:00:52.226 ==> default: -- Cpus: 10 00:00:52.226 ==> default: -- Feature: acpi 00:00:52.226 ==> default: -- Feature: apic 00:00:52.226 ==> default: -- Feature: pae 00:00:52.226 ==> default: -- Memory: 12288M 00:00:52.226 ==> default: -- Memory Backing: hugepages: 00:00:52.226 ==> default: -- Management MAC: 00:00:52.226 ==> default: -- Loader: 00:00:52.226 ==> default: -- Nvram: 00:00:52.226 ==> default: -- Base box: spdk/fedora38 00:00:52.226 ==> default: -- Storage pool: default 00:00:52.226 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721067613_d43b8d212f955a1a43e7.img (20G) 00:00:52.226 ==> default: -- Volume Cache: default 00:00:52.226 ==> default: -- Kernel: 00:00:52.226 ==> default: -- Initrd: 00:00:52.226 ==> default: -- Graphics Type: vnc 00:00:52.226 ==> default: -- Graphics Port: -1 00:00:52.226 ==> default: -- Graphics IP: 127.0.0.1 00:00:52.226 ==> default: -- Graphics Password: Not defined 00:00:52.226 ==> default: -- Video Type: cirrus 00:00:52.226 ==> default: -- Video VRAM: 9216 00:00:52.226 ==> default: -- Sound Type: 00:00:52.226 ==> default: -- Keymap: en-us 00:00:52.226 ==> default: -- TPM Path: 00:00:52.226 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:52.226 ==> default: -- Command line args: 00:00:52.226 ==> default: -> value=-device, 00:00:52.226 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:00:52.226 ==> default: -> value=-drive, 00:00:52.226 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme.img,if=none,id=nvme-0-drive0, 00:00:52.226 ==> default: -> value=-device, 00:00:52.226 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:52.226 ==> default: -> value=-device, 00:00:52.226 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:00:52.226 ==> default: -> value=-drive, 00:00:52.226 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:00:52.226 ==> default: -> value=-device, 00:00:52.226 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:52.226 ==> default: -> value=-drive, 00:00:52.226 ==> default: -> 
value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:00:52.226 ==> default: -> value=-device, 00:00:52.226 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:52.226 ==> default: -> value=-drive, 00:00:52.226 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:00:52.226 ==> default: -> value=-device, 00:00:52.226 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:52.794 ==> default: Creating shared folders metadata... 00:00:52.794 ==> default: Starting domain. 00:00:55.392 ==> default: Waiting for domain to get an IP address... 00:01:13.465 ==> default: Waiting for SSH to become available... 00:01:13.465 ==> default: Configuring and enabling network interfaces... 00:01:17.673 default: SSH address: 192.168.121.56:22 00:01:17.673 default: SSH username: vagrant 00:01:17.673 default: SSH auth method: private key 00:01:20.958 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:29.104 ==> default: Mounting SSHFS shared folder... 00:01:30.478 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:01:30.478 ==> default: Checking Mount.. 00:01:31.850 ==> default: Folder Successfully Mounted! 00:01:31.850 ==> default: Running provisioner: file... 00:01:33.223 default: ~/.gitconfig => .gitconfig 00:01:33.789 00:01:33.789 SUCCESS! 00:01:33.789 00:01:33.789 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt and type "vagrant ssh" to use. 00:01:33.789 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:33.789 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt" to destroy all trace of vm. 00:01:33.789 00:01:33.799 [Pipeline] } 00:01:33.819 [Pipeline] // stage 00:01:33.829 [Pipeline] dir 00:01:33.830 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt 00:01:33.832 [Pipeline] { 00:01:33.847 [Pipeline] catchError 00:01:33.849 [Pipeline] { 00:01:33.863 [Pipeline] sh 00:01:34.166 + vagrant ssh-config --host vagrant 00:01:34.166 + sed -ne /^Host/,$p 00:01:34.166 + tee ssh_conf 00:01:37.461 Host vagrant 00:01:37.461 HostName 192.168.121.56 00:01:37.461 User vagrant 00:01:37.461 Port 22 00:01:37.461 UserKnownHostsFile /dev/null 00:01:37.461 StrictHostKeyChecking no 00:01:37.461 PasswordAuthentication no 00:01:37.461 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:01:37.461 IdentitiesOnly yes 00:01:37.461 LogLevel FATAL 00:01:37.461 ForwardAgent yes 00:01:37.461 ForwardX11 yes 00:01:37.461 00:01:37.475 [Pipeline] withEnv 00:01:37.478 [Pipeline] { 00:01:37.496 [Pipeline] sh 00:01:37.777 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:37.777 source /etc/os-release 00:01:37.777 [[ -e /image.version ]] && img=$(< /image.version) 00:01:37.777 # Minimal, systemd-like check. 
00:01:37.777 if [[ -e /.dockerenv ]]; then 00:01:37.777 # Clear garbage from the node's name: 00:01:37.777 # agt-er_autotest_547-896 -> autotest_547-896 00:01:37.777 # $HOSTNAME is the actual container id 00:01:37.777 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:37.777 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:37.777 # We can assume this is a mount from a host where container is running, 00:01:37.777 # so fetch its hostname to easily identify the target swarm worker. 00:01:37.777 container="$(< /etc/hostname) ($agent)" 00:01:37.777 else 00:01:37.777 # Fallback 00:01:37.777 container=$agent 00:01:37.777 fi 00:01:37.777 fi 00:01:37.777 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:37.777 00:01:38.077 [Pipeline] } 00:01:38.093 [Pipeline] // withEnv 00:01:38.100 [Pipeline] setCustomBuildProperty 00:01:38.113 [Pipeline] stage 00:01:38.115 [Pipeline] { (Tests) 00:01:38.132 [Pipeline] sh 00:01:38.408 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:38.676 [Pipeline] sh 00:01:38.953 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:39.224 [Pipeline] timeout 00:01:39.225 Timeout set to expire in 40 min 00:01:39.226 [Pipeline] { 00:01:39.243 [Pipeline] sh 00:01:39.522 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:40.116 HEAD is now at cd61d4ab3 scripts/setup.sh: Use HUGE_EVEN_ALLOC logic by default 00:01:40.129 [Pipeline] sh 00:01:40.517 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:40.787 [Pipeline] sh 00:01:41.061 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:41.337 [Pipeline] sh 00:01:41.618 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-vg-autotest ./autoruner.sh spdk_repo 00:01:41.876 ++ readlink -f spdk_repo 00:01:41.876 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:41.876 + [[ -n /home/vagrant/spdk_repo ]] 00:01:41.876 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:41.876 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:41.876 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:41.876 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:41.876 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:41.876 + [[ nvmf-tcp-vg-autotest == pkgdep-* ]] 00:01:41.876 + cd /home/vagrant/spdk_repo 00:01:41.876 + source /etc/os-release 00:01:41.876 ++ NAME='Fedora Linux' 00:01:41.876 ++ VERSION='38 (Cloud Edition)' 00:01:41.876 ++ ID=fedora 00:01:41.876 ++ VERSION_ID=38 00:01:41.876 ++ VERSION_CODENAME= 00:01:41.876 ++ PLATFORM_ID=platform:f38 00:01:41.876 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:41.876 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:41.876 ++ LOGO=fedora-logo-icon 00:01:41.876 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:41.876 ++ HOME_URL=https://fedoraproject.org/ 00:01:41.876 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:41.876 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:41.876 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:41.876 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:41.876 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:41.876 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:41.876 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:41.876 ++ SUPPORT_END=2024-05-14 00:01:41.876 ++ VARIANT='Cloud Edition' 00:01:41.876 ++ VARIANT_ID=cloud 00:01:41.876 + uname -a 00:01:41.876 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:41.876 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:42.442 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:42.442 Hugepages 00:01:42.442 node hugesize free / total 00:01:42.442 node0 1048576kB 0 / 0 00:01:42.442 node0 2048kB 0 / 0 00:01:42.442 00:01:42.442 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:42.442 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:42.442 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:42.442 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:01:42.700 + rm -f /tmp/spdk-ld-path 00:01:42.700 + source autorun-spdk.conf 00:01:42.700 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:42.700 ++ SPDK_TEST_NVMF=1 00:01:42.700 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:42.700 ++ SPDK_TEST_USDT=1 00:01:42.700 ++ SPDK_TEST_NVMF_MDNS=1 00:01:42.700 ++ SPDK_RUN_UBSAN=1 00:01:42.700 ++ NET_TYPE=virt 00:01:42.700 ++ SPDK_JSONRPC_GO_CLIENT=1 00:01:42.700 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:42.700 ++ RUN_NIGHTLY=0 00:01:42.700 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:42.700 + [[ -n '' ]] 00:01:42.700 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:42.700 + for M in /var/spdk/build-*-manifest.txt 00:01:42.700 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:42.700 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:42.700 + for M in /var/spdk/build-*-manifest.txt 00:01:42.700 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:42.700 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:42.700 ++ uname 00:01:42.700 + [[ Linux == \L\i\n\u\x ]] 00:01:42.700 + sudo dmesg -T 00:01:42.700 + sudo dmesg --clear 00:01:42.700 + dmesg_pid=5107 00:01:42.700 + sudo dmesg -Tw 00:01:42.700 + [[ Fedora Linux == FreeBSD ]] 00:01:42.700 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:42.700 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:42.700 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:42.700 + [[ -x /usr/src/fio-static/fio ]] 00:01:42.700 + 
export FIO_BIN=/usr/src/fio-static/fio 00:01:42.700 + FIO_BIN=/usr/src/fio-static/fio 00:01:42.700 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:42.700 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:42.700 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:42.700 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:42.700 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:42.700 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:42.700 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:42.700 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:42.700 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:42.700 Test configuration: 00:01:42.700 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:42.700 SPDK_TEST_NVMF=1 00:01:42.700 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:42.700 SPDK_TEST_USDT=1 00:01:42.700 SPDK_TEST_NVMF_MDNS=1 00:01:42.700 SPDK_RUN_UBSAN=1 00:01:42.700 NET_TYPE=virt 00:01:42.700 SPDK_JSONRPC_GO_CLIENT=1 00:01:42.700 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:42.959 RUN_NIGHTLY=0 18:21:05 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:42.959 18:21:05 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:42.959 18:21:05 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:42.959 18:21:05 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:42.959 18:21:05 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:42.959 18:21:05 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:42.959 18:21:05 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:42.959 18:21:05 -- paths/export.sh@5 -- $ export PATH 00:01:42.959 18:21:05 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:42.959 18:21:05 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:42.959 18:21:05 -- common/autobuild_common.sh@444 -- $ date +%s 00:01:42.959 18:21:05 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721067665.XXXXXX 00:01:42.959 18:21:05 -- common/autobuild_common.sh@444 -- $ 
SPDK_WORKSPACE=/tmp/spdk_1721067665.MdGDEL 00:01:42.959 18:21:05 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:01:42.959 18:21:05 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:01:42.959 18:21:05 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:42.959 18:21:05 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:42.959 18:21:05 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:42.959 18:21:05 -- common/autobuild_common.sh@460 -- $ get_config_params 00:01:42.959 18:21:05 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:42.959 18:21:05 -- common/autotest_common.sh@10 -- $ set +x 00:01:42.959 18:21:05 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang' 00:01:42.959 18:21:05 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:01:42.959 18:21:05 -- pm/common@17 -- $ local monitor 00:01:42.959 18:21:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:42.959 18:21:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:42.959 18:21:05 -- pm/common@25 -- $ sleep 1 00:01:42.959 18:21:05 -- pm/common@21 -- $ date +%s 00:01:42.959 18:21:05 -- pm/common@21 -- $ date +%s 00:01:42.959 18:21:05 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721067665 00:01:42.959 18:21:05 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721067665 00:01:42.959 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721067665_collect-vmstat.pm.log 00:01:42.959 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721067665_collect-cpu-load.pm.log 00:01:43.893 18:21:06 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:01:43.893 18:21:06 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:43.893 18:21:06 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:43.893 18:21:06 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:43.893 18:21:06 -- spdk/autobuild.sh@16 -- $ date -u 00:01:43.893 Mon Jul 15 06:21:06 PM UTC 2024 00:01:43.893 18:21:06 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:43.893 v24.09-pre-210-gcd61d4ab3 00:01:43.893 18:21:06 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:43.893 18:21:06 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:43.893 18:21:06 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:43.893 18:21:06 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:43.893 18:21:06 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:43.893 18:21:06 -- common/autotest_common.sh@10 -- $ set +x 00:01:43.893 ************************************ 00:01:43.893 START TEST ubsan 00:01:43.893 ************************************ 00:01:43.893 using ubsan 00:01:43.893 18:21:06 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:01:43.893 00:01:43.893 
real 0m0.001s 00:01:43.893 user 0m0.000s 00:01:43.893 sys 0m0.000s 00:01:43.893 18:21:06 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:43.893 ************************************ 00:01:43.893 END TEST ubsan 00:01:43.893 18:21:06 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:43.893 ************************************ 00:01:43.893 18:21:06 -- common/autotest_common.sh@1142 -- $ return 0 00:01:43.893 18:21:06 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:43.893 18:21:06 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:43.893 18:21:06 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:43.893 18:21:06 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:43.893 18:21:06 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:43.893 18:21:06 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:43.893 18:21:06 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:43.893 18:21:06 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:43.893 18:21:06 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang --with-shared 00:01:44.152 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:44.152 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:44.716 Using 'verbs' RDMA provider 00:02:03.785 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:16.013 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:16.013 go version go1.21.1 linux/amd64 00:02:16.272 Creating mk/config.mk...done. 00:02:16.272 Creating mk/cc.flags.mk...done. 00:02:16.272 Type 'make' to build. 00:02:16.272 18:21:38 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:02:16.272 18:21:38 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:16.272 18:21:38 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:16.272 18:21:38 -- common/autotest_common.sh@10 -- $ set +x 00:02:16.272 ************************************ 00:02:16.272 START TEST make 00:02:16.272 ************************************ 00:02:16.272 18:21:38 make -- common/autotest_common.sh@1123 -- $ make -j10 00:02:16.838 make[1]: Nothing to be done for 'all'. 
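Note: the configure call and the `make -j10` step above (and the DPDK/ISA-L sub-builds that follow) can be reproduced outside Jenkins. A minimal sketch, assuming the public GitHub mirror of SPDK stands in for the Gerrit build-pool remote; the commit and every configure flag are taken verbatim from the log:

    # Rebuild the same SPDK revision with the same options as this job
    git clone https://github.com/spdk/spdk /home/vagrant/spdk_repo/spdk
    cd /home/vagrant/spdk_repo/spdk
    git checkout cd61d4ab3          # v24.09-pre-210-gcd61d4ab3, the commit under test
    git submodule update --init     # pulls the bundled dpdk built below
    ./configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan \
        --enable-coverage --with-ublk --with-avahi --with-golang --with-shared
    make -j10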
00:02:29.043 The Meson build system 00:02:29.043 Version: 1.3.1 00:02:29.043 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:29.043 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:29.043 Build type: native build 00:02:29.043 Program cat found: YES (/usr/bin/cat) 00:02:29.043 Project name: DPDK 00:02:29.043 Project version: 24.03.0 00:02:29.043 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:29.043 C linker for the host machine: cc ld.bfd 2.39-16 00:02:29.043 Host machine cpu family: x86_64 00:02:29.043 Host machine cpu: x86_64 00:02:29.043 Message: ## Building in Developer Mode ## 00:02:29.043 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:29.044 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:29.044 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:29.044 Program python3 found: YES (/usr/bin/python3) 00:02:29.044 Program cat found: YES (/usr/bin/cat) 00:02:29.044 Compiler for C supports arguments -march=native: YES 00:02:29.044 Checking for size of "void *" : 8 00:02:29.044 Checking for size of "void *" : 8 (cached) 00:02:29.044 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:02:29.044 Library m found: YES 00:02:29.044 Library numa found: YES 00:02:29.044 Has header "numaif.h" : YES 00:02:29.044 Library fdt found: NO 00:02:29.044 Library execinfo found: NO 00:02:29.044 Has header "execinfo.h" : YES 00:02:29.044 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:29.044 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:29.044 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:29.044 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:29.044 Run-time dependency openssl found: YES 3.0.9 00:02:29.044 Run-time dependency libpcap found: YES 1.10.4 00:02:29.044 Has header "pcap.h" with dependency libpcap: YES 00:02:29.044 Compiler for C supports arguments -Wcast-qual: YES 00:02:29.044 Compiler for C supports arguments -Wdeprecated: YES 00:02:29.044 Compiler for C supports arguments -Wformat: YES 00:02:29.044 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:29.044 Compiler for C supports arguments -Wformat-security: NO 00:02:29.044 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:29.044 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:29.044 Compiler for C supports arguments -Wnested-externs: YES 00:02:29.044 Compiler for C supports arguments -Wold-style-definition: YES 00:02:29.044 Compiler for C supports arguments -Wpointer-arith: YES 00:02:29.044 Compiler for C supports arguments -Wsign-compare: YES 00:02:29.044 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:29.044 Compiler for C supports arguments -Wundef: YES 00:02:29.044 Compiler for C supports arguments -Wwrite-strings: YES 00:02:29.044 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:29.044 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:29.044 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:29.044 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:29.044 Program objdump found: YES (/usr/bin/objdump) 00:02:29.044 Compiler for C supports arguments -mavx512f: YES 00:02:29.044 Checking if "AVX512 checking" compiles: YES 00:02:29.044 Fetching value of define "__SSE4_2__" : 1 00:02:29.044 Fetching value of define 
"__AES__" : 1 00:02:29.044 Fetching value of define "__AVX__" : 1 00:02:29.044 Fetching value of define "__AVX2__" : 1 00:02:29.044 Fetching value of define "__AVX512BW__" : 1 00:02:29.044 Fetching value of define "__AVX512CD__" : 1 00:02:29.044 Fetching value of define "__AVX512DQ__" : 1 00:02:29.044 Fetching value of define "__AVX512F__" : 1 00:02:29.044 Fetching value of define "__AVX512VL__" : 1 00:02:29.044 Fetching value of define "__PCLMUL__" : 1 00:02:29.044 Fetching value of define "__RDRND__" : 1 00:02:29.044 Fetching value of define "__RDSEED__" : 1 00:02:29.044 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:29.044 Fetching value of define "__znver1__" : (undefined) 00:02:29.044 Fetching value of define "__znver2__" : (undefined) 00:02:29.044 Fetching value of define "__znver3__" : (undefined) 00:02:29.044 Fetching value of define "__znver4__" : (undefined) 00:02:29.044 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:29.044 Message: lib/log: Defining dependency "log" 00:02:29.044 Message: lib/kvargs: Defining dependency "kvargs" 00:02:29.044 Message: lib/telemetry: Defining dependency "telemetry" 00:02:29.044 Checking for function "getentropy" : NO 00:02:29.044 Message: lib/eal: Defining dependency "eal" 00:02:29.044 Message: lib/ring: Defining dependency "ring" 00:02:29.044 Message: lib/rcu: Defining dependency "rcu" 00:02:29.044 Message: lib/mempool: Defining dependency "mempool" 00:02:29.044 Message: lib/mbuf: Defining dependency "mbuf" 00:02:29.044 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:29.044 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:29.044 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:29.044 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:29.044 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:29.044 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:29.044 Compiler for C supports arguments -mpclmul: YES 00:02:29.044 Compiler for C supports arguments -maes: YES 00:02:29.044 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:29.044 Compiler for C supports arguments -mavx512bw: YES 00:02:29.044 Compiler for C supports arguments -mavx512dq: YES 00:02:29.044 Compiler for C supports arguments -mavx512vl: YES 00:02:29.044 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:29.044 Compiler for C supports arguments -mavx2: YES 00:02:29.044 Compiler for C supports arguments -mavx: YES 00:02:29.044 Message: lib/net: Defining dependency "net" 00:02:29.044 Message: lib/meter: Defining dependency "meter" 00:02:29.044 Message: lib/ethdev: Defining dependency "ethdev" 00:02:29.044 Message: lib/pci: Defining dependency "pci" 00:02:29.044 Message: lib/cmdline: Defining dependency "cmdline" 00:02:29.044 Message: lib/hash: Defining dependency "hash" 00:02:29.044 Message: lib/timer: Defining dependency "timer" 00:02:29.044 Message: lib/compressdev: Defining dependency "compressdev" 00:02:29.044 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:29.044 Message: lib/dmadev: Defining dependency "dmadev" 00:02:29.044 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:29.044 Message: lib/power: Defining dependency "power" 00:02:29.044 Message: lib/reorder: Defining dependency "reorder" 00:02:29.044 Message: lib/security: Defining dependency "security" 00:02:29.044 Has header "linux/userfaultfd.h" : YES 00:02:29.044 Has header "linux/vduse.h" : YES 00:02:29.044 Message: lib/vhost: Defining dependency "vhost" 00:02:29.044 Compiler for C 
supports arguments -Wno-format-truncation: YES (cached) 00:02:29.044 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:29.044 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:29.044 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:29.044 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:29.044 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:29.044 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:29.044 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:29.044 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:29.044 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:29.044 Program doxygen found: YES (/usr/bin/doxygen) 00:02:29.044 Configuring doxy-api-html.conf using configuration 00:02:29.044 Configuring doxy-api-man.conf using configuration 00:02:29.044 Program mandb found: YES (/usr/bin/mandb) 00:02:29.044 Program sphinx-build found: NO 00:02:29.044 Configuring rte_build_config.h using configuration 00:02:29.044 Message: 00:02:29.044 ================= 00:02:29.044 Applications Enabled 00:02:29.044 ================= 00:02:29.044 00:02:29.044 apps: 00:02:29.044 00:02:29.044 00:02:29.044 Message: 00:02:29.044 ================= 00:02:29.044 Libraries Enabled 00:02:29.044 ================= 00:02:29.044 00:02:29.044 libs: 00:02:29.044 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:29.044 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:29.044 cryptodev, dmadev, power, reorder, security, vhost, 00:02:29.044 00:02:29.044 Message: 00:02:29.044 =============== 00:02:29.044 Drivers Enabled 00:02:29.044 =============== 00:02:29.044 00:02:29.044 common: 00:02:29.044 00:02:29.044 bus: 00:02:29.044 pci, vdev, 00:02:29.044 mempool: 00:02:29.044 ring, 00:02:29.044 dma: 00:02:29.044 00:02:29.044 net: 00:02:29.044 00:02:29.044 crypto: 00:02:29.044 00:02:29.044 compress: 00:02:29.044 00:02:29.044 vdpa: 00:02:29.044 00:02:29.044 00:02:29.044 Message: 00:02:29.044 ================= 00:02:29.044 Content Skipped 00:02:29.044 ================= 00:02:29.044 00:02:29.044 apps: 00:02:29.044 dumpcap: explicitly disabled via build config 00:02:29.044 graph: explicitly disabled via build config 00:02:29.044 pdump: explicitly disabled via build config 00:02:29.044 proc-info: explicitly disabled via build config 00:02:29.044 test-acl: explicitly disabled via build config 00:02:29.044 test-bbdev: explicitly disabled via build config 00:02:29.044 test-cmdline: explicitly disabled via build config 00:02:29.044 test-compress-perf: explicitly disabled via build config 00:02:29.044 test-crypto-perf: explicitly disabled via build config 00:02:29.044 test-dma-perf: explicitly disabled via build config 00:02:29.044 test-eventdev: explicitly disabled via build config 00:02:29.044 test-fib: explicitly disabled via build config 00:02:29.044 test-flow-perf: explicitly disabled via build config 00:02:29.044 test-gpudev: explicitly disabled via build config 00:02:29.044 test-mldev: explicitly disabled via build config 00:02:29.044 test-pipeline: explicitly disabled via build config 00:02:29.044 test-pmd: explicitly disabled via build config 00:02:29.044 test-regex: explicitly disabled via build config 00:02:29.044 test-sad: explicitly disabled via build config 00:02:29.044 test-security-perf: explicitly disabled via build config 00:02:29.044 00:02:29.044 libs: 00:02:29.044 argparse: 
explicitly disabled via build config 00:02:29.044 metrics: explicitly disabled via build config 00:02:29.044 acl: explicitly disabled via build config 00:02:29.044 bbdev: explicitly disabled via build config 00:02:29.044 bitratestats: explicitly disabled via build config 00:02:29.044 bpf: explicitly disabled via build config 00:02:29.044 cfgfile: explicitly disabled via build config 00:02:29.044 distributor: explicitly disabled via build config 00:02:29.044 efd: explicitly disabled via build config 00:02:29.044 eventdev: explicitly disabled via build config 00:02:29.044 dispatcher: explicitly disabled via build config 00:02:29.044 gpudev: explicitly disabled via build config 00:02:29.044 gro: explicitly disabled via build config 00:02:29.044 gso: explicitly disabled via build config 00:02:29.044 ip_frag: explicitly disabled via build config 00:02:29.044 jobstats: explicitly disabled via build config 00:02:29.044 latencystats: explicitly disabled via build config 00:02:29.044 lpm: explicitly disabled via build config 00:02:29.044 member: explicitly disabled via build config 00:02:29.044 pcapng: explicitly disabled via build config 00:02:29.044 rawdev: explicitly disabled via build config 00:02:29.044 regexdev: explicitly disabled via build config 00:02:29.044 mldev: explicitly disabled via build config 00:02:29.045 rib: explicitly disabled via build config 00:02:29.045 sched: explicitly disabled via build config 00:02:29.045 stack: explicitly disabled via build config 00:02:29.045 ipsec: explicitly disabled via build config 00:02:29.045 pdcp: explicitly disabled via build config 00:02:29.045 fib: explicitly disabled via build config 00:02:29.045 port: explicitly disabled via build config 00:02:29.045 pdump: explicitly disabled via build config 00:02:29.045 table: explicitly disabled via build config 00:02:29.045 pipeline: explicitly disabled via build config 00:02:29.045 graph: explicitly disabled via build config 00:02:29.045 node: explicitly disabled via build config 00:02:29.045 00:02:29.045 drivers: 00:02:29.045 common/cpt: not in enabled drivers build config 00:02:29.045 common/dpaax: not in enabled drivers build config 00:02:29.045 common/iavf: not in enabled drivers build config 00:02:29.045 common/idpf: not in enabled drivers build config 00:02:29.045 common/ionic: not in enabled drivers build config 00:02:29.045 common/mvep: not in enabled drivers build config 00:02:29.045 common/octeontx: not in enabled drivers build config 00:02:29.045 bus/auxiliary: not in enabled drivers build config 00:02:29.045 bus/cdx: not in enabled drivers build config 00:02:29.045 bus/dpaa: not in enabled drivers build config 00:02:29.045 bus/fslmc: not in enabled drivers build config 00:02:29.045 bus/ifpga: not in enabled drivers build config 00:02:29.045 bus/platform: not in enabled drivers build config 00:02:29.045 bus/uacce: not in enabled drivers build config 00:02:29.045 bus/vmbus: not in enabled drivers build config 00:02:29.045 common/cnxk: not in enabled drivers build config 00:02:29.045 common/mlx5: not in enabled drivers build config 00:02:29.045 common/nfp: not in enabled drivers build config 00:02:29.045 common/nitrox: not in enabled drivers build config 00:02:29.045 common/qat: not in enabled drivers build config 00:02:29.045 common/sfc_efx: not in enabled drivers build config 00:02:29.045 mempool/bucket: not in enabled drivers build config 00:02:29.045 mempool/cnxk: not in enabled drivers build config 00:02:29.045 mempool/dpaa: not in enabled drivers build config 00:02:29.045 mempool/dpaa2: 
not in enabled drivers build config 00:02:29.045 mempool/octeontx: not in enabled drivers build config 00:02:29.045 mempool/stack: not in enabled drivers build config 00:02:29.045 dma/cnxk: not in enabled drivers build config 00:02:29.045 dma/dpaa: not in enabled drivers build config 00:02:29.045 dma/dpaa2: not in enabled drivers build config 00:02:29.045 dma/hisilicon: not in enabled drivers build config 00:02:29.045 dma/idxd: not in enabled drivers build config 00:02:29.045 dma/ioat: not in enabled drivers build config 00:02:29.045 dma/skeleton: not in enabled drivers build config 00:02:29.045 net/af_packet: not in enabled drivers build config 00:02:29.045 net/af_xdp: not in enabled drivers build config 00:02:29.045 net/ark: not in enabled drivers build config 00:02:29.045 net/atlantic: not in enabled drivers build config 00:02:29.045 net/avp: not in enabled drivers build config 00:02:29.045 net/axgbe: not in enabled drivers build config 00:02:29.045 net/bnx2x: not in enabled drivers build config 00:02:29.045 net/bnxt: not in enabled drivers build config 00:02:29.045 net/bonding: not in enabled drivers build config 00:02:29.045 net/cnxk: not in enabled drivers build config 00:02:29.045 net/cpfl: not in enabled drivers build config 00:02:29.045 net/cxgbe: not in enabled drivers build config 00:02:29.045 net/dpaa: not in enabled drivers build config 00:02:29.045 net/dpaa2: not in enabled drivers build config 00:02:29.045 net/e1000: not in enabled drivers build config 00:02:29.045 net/ena: not in enabled drivers build config 00:02:29.045 net/enetc: not in enabled drivers build config 00:02:29.045 net/enetfec: not in enabled drivers build config 00:02:29.045 net/enic: not in enabled drivers build config 00:02:29.045 net/failsafe: not in enabled drivers build config 00:02:29.045 net/fm10k: not in enabled drivers build config 00:02:29.045 net/gve: not in enabled drivers build config 00:02:29.045 net/hinic: not in enabled drivers build config 00:02:29.045 net/hns3: not in enabled drivers build config 00:02:29.045 net/i40e: not in enabled drivers build config 00:02:29.045 net/iavf: not in enabled drivers build config 00:02:29.045 net/ice: not in enabled drivers build config 00:02:29.045 net/idpf: not in enabled drivers build config 00:02:29.045 net/igc: not in enabled drivers build config 00:02:29.045 net/ionic: not in enabled drivers build config 00:02:29.045 net/ipn3ke: not in enabled drivers build config 00:02:29.045 net/ixgbe: not in enabled drivers build config 00:02:29.045 net/mana: not in enabled drivers build config 00:02:29.045 net/memif: not in enabled drivers build config 00:02:29.045 net/mlx4: not in enabled drivers build config 00:02:29.045 net/mlx5: not in enabled drivers build config 00:02:29.045 net/mvneta: not in enabled drivers build config 00:02:29.045 net/mvpp2: not in enabled drivers build config 00:02:29.045 net/netvsc: not in enabled drivers build config 00:02:29.045 net/nfb: not in enabled drivers build config 00:02:29.045 net/nfp: not in enabled drivers build config 00:02:29.045 net/ngbe: not in enabled drivers build config 00:02:29.045 net/null: not in enabled drivers build config 00:02:29.045 net/octeontx: not in enabled drivers build config 00:02:29.045 net/octeon_ep: not in enabled drivers build config 00:02:29.045 net/pcap: not in enabled drivers build config 00:02:29.045 net/pfe: not in enabled drivers build config 00:02:29.045 net/qede: not in enabled drivers build config 00:02:29.045 net/ring: not in enabled drivers build config 00:02:29.045 net/sfc: not in 
enabled drivers build config 00:02:29.045 net/softnic: not in enabled drivers build config 00:02:29.045 net/tap: not in enabled drivers build config 00:02:29.045 net/thunderx: not in enabled drivers build config 00:02:29.045 net/txgbe: not in enabled drivers build config 00:02:29.045 net/vdev_netvsc: not in enabled drivers build config 00:02:29.045 net/vhost: not in enabled drivers build config 00:02:29.045 net/virtio: not in enabled drivers build config 00:02:29.045 net/vmxnet3: not in enabled drivers build config 00:02:29.045 raw/*: missing internal dependency, "rawdev" 00:02:29.045 crypto/armv8: not in enabled drivers build config 00:02:29.045 crypto/bcmfs: not in enabled drivers build config 00:02:29.045 crypto/caam_jr: not in enabled drivers build config 00:02:29.045 crypto/ccp: not in enabled drivers build config 00:02:29.045 crypto/cnxk: not in enabled drivers build config 00:02:29.045 crypto/dpaa_sec: not in enabled drivers build config 00:02:29.045 crypto/dpaa2_sec: not in enabled drivers build config 00:02:29.045 crypto/ipsec_mb: not in enabled drivers build config 00:02:29.045 crypto/mlx5: not in enabled drivers build config 00:02:29.045 crypto/mvsam: not in enabled drivers build config 00:02:29.045 crypto/nitrox: not in enabled drivers build config 00:02:29.045 crypto/null: not in enabled drivers build config 00:02:29.045 crypto/octeontx: not in enabled drivers build config 00:02:29.045 crypto/openssl: not in enabled drivers build config 00:02:29.045 crypto/scheduler: not in enabled drivers build config 00:02:29.045 crypto/uadk: not in enabled drivers build config 00:02:29.045 crypto/virtio: not in enabled drivers build config 00:02:29.045 compress/isal: not in enabled drivers build config 00:02:29.045 compress/mlx5: not in enabled drivers build config 00:02:29.045 compress/nitrox: not in enabled drivers build config 00:02:29.045 compress/octeontx: not in enabled drivers build config 00:02:29.045 compress/zlib: not in enabled drivers build config 00:02:29.045 regex/*: missing internal dependency, "regexdev" 00:02:29.045 ml/*: missing internal dependency, "mldev" 00:02:29.045 vdpa/ifc: not in enabled drivers build config 00:02:29.045 vdpa/mlx5: not in enabled drivers build config 00:02:29.045 vdpa/nfp: not in enabled drivers build config 00:02:29.045 vdpa/sfc: not in enabled drivers build config 00:02:29.045 event/*: missing internal dependency, "eventdev" 00:02:29.045 baseband/*: missing internal dependency, "bbdev" 00:02:29.045 gpu/*: missing internal dependency, "gpudev" 00:02:29.045 00:02:29.045 00:02:29.045 Build targets in project: 85 00:02:29.045 00:02:29.045 DPDK 24.03.0 00:02:29.045 00:02:29.045 User defined options 00:02:29.045 buildtype : debug 00:02:29.045 default_library : shared 00:02:29.045 libdir : lib 00:02:29.045 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:29.045 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:29.045 c_link_args : 00:02:29.045 cpu_instruction_set: native 00:02:29.045 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:29.045 disable_libs : 
acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:29.045 enable_docs : false 00:02:29.045 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:29.045 enable_kmods : false 00:02:29.045 max_lcores : 128 00:02:29.045 tests : false 00:02:29.045 00:02:29.045 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:29.045 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:29.045 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:29.045 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:29.045 [3/268] Linking static target lib/librte_kvargs.a 00:02:29.045 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:29.045 [5/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:29.045 [6/268] Linking static target lib/librte_log.a 00:02:29.305 [7/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:29.305 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:29.305 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:29.305 [10/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.305 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:29.305 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:29.305 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:29.305 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:29.305 [15/268] Linking static target lib/librte_telemetry.a 00:02:29.305 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:29.305 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:29.563 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:29.821 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:29.821 [20/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.821 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:29.821 [22/268] Linking target lib/librte_log.so.24.1 00:02:29.821 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:29.822 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:29.822 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:30.080 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:30.080 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:30.080 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:30.080 [29/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:30.080 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:30.080 [31/268] Linking target lib/librte_kvargs.so.24.1 00:02:30.339 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:30.339 [33/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.339 
[34/268] Linking target lib/librte_telemetry.so.24.1 00:02:30.339 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:30.597 [36/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:30.597 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:30.597 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:30.597 [39/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:30.597 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:30.597 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:30.597 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:30.597 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:30.856 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:30.856 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:30.856 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:30.856 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:31.113 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:31.113 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:31.113 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:31.371 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:31.371 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:31.371 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:31.371 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:31.371 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:31.371 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:31.371 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:31.630 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:31.630 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:31.630 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:31.630 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:31.630 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:31.889 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:31.889 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:31.889 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:32.147 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:32.147 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:32.147 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:32.406 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:32.406 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:32.406 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:32.406 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:32.665 [73/268] Compiling C object 
lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:32.665 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:32.665 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:32.665 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:32.665 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:32.924 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:32.924 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:32.924 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:32.924 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:33.183 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:33.183 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:33.183 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:33.183 [85/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:33.183 [86/268] Linking static target lib/librte_ring.a 00:02:33.183 [87/268] Linking static target lib/librte_eal.a 00:02:33.442 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:33.700 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:33.700 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:33.700 [91/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:33.700 [92/268] Linking static target lib/librte_rcu.a 00:02:33.700 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:33.700 [94/268] Linking static target lib/librte_mempool.a 00:02:33.700 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:33.700 [96/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.959 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:34.218 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:34.218 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:34.218 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:34.218 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:34.218 [102/268] Linking static target lib/librte_mbuf.a 00:02:34.218 [103/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:34.477 [104/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:34.477 [105/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.477 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:34.735 [107/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:34.735 [108/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:34.735 [109/268] Linking static target lib/librte_meter.a 00:02:35.348 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:35.348 [111/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:35.348 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:35.348 [113/268] Linking static target lib/librte_net.a 00:02:35.348 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:35.348 [115/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 
00:02:35.348 [116/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.609 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:35.868 [118/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.868 [119/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.868 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:35.868 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:36.127 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:36.387 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:36.387 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:36.646 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:36.647 [126/268] Linking static target lib/librte_pci.a 00:02:36.647 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:36.647 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:36.647 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:36.647 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:36.905 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:36.905 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:36.905 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:36.905 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:36.905 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:36.905 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:36.905 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:36.905 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:36.905 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:37.163 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:37.163 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:37.163 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:37.163 [143/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.163 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:37.163 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:37.163 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:37.420 [147/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:37.421 [148/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:37.421 [149/268] Linking static target lib/librte_cmdline.a 00:02:37.421 [150/268] Linking static target lib/librte_ethdev.a 00:02:37.421 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:37.678 [152/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:37.678 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:37.678 [154/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:37.678 [155/268] 
Linking static target lib/librte_timer.a 00:02:37.678 [156/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:37.678 [157/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:37.937 [158/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:37.937 [159/268] Linking static target lib/librte_compressdev.a 00:02:38.196 [160/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:38.196 [161/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:38.196 [162/268] Linking static target lib/librte_hash.a 00:02:38.196 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:38.196 [164/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:38.454 [165/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:38.454 [166/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.454 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:38.751 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:38.751 [169/268] Linking static target lib/librte_dmadev.a 00:02:38.751 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:38.751 [171/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:38.751 [172/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:38.751 [173/268] Linking static target lib/librte_cryptodev.a 00:02:38.751 [174/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:39.009 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:39.267 [176/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:39.267 [177/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.267 [178/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:39.267 [179/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.525 [180/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.525 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:39.525 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:39.525 [183/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:39.525 [184/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.785 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:39.785 [186/268] Linking static target lib/librte_power.a 00:02:39.785 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:39.785 [188/268] Linking static target lib/librte_reorder.a 00:02:39.785 [189/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:39.785 [190/268] Linking static target lib/librte_security.a 00:02:40.042 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:40.042 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:40.042 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:40.607 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.607 [195/268] Compiling C object 
lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:40.865 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.865 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:40.865 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:41.123 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:41.123 [200/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.123 [201/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:41.123 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:41.381 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:41.381 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:41.640 [205/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.640 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:41.640 [207/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:41.640 [208/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:41.640 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:41.897 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:41.897 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:41.897 [212/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:41.897 [213/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:41.897 [214/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:41.897 [215/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:41.897 [216/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:41.897 [217/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:41.897 [218/268] Linking static target drivers/librte_bus_vdev.a 00:02:41.897 [219/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:41.897 [220/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:41.897 [221/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:41.897 [222/268] Linking static target drivers/librte_bus_pci.a 00:02:42.178 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:42.178 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:42.178 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:42.178 [226/268] Linking static target drivers/librte_mempool_ring.a 00:02:42.178 [227/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.744 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.002 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:43.002 [230/268] Linking static target lib/librte_vhost.a 00:02:45.537 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.436 [232/268] Generating lib/eal.sym_chk with a 
custom command (wrapped by meson to capture output) 00:02:47.695 [233/268] Linking target lib/librte_eal.so.24.1 00:02:47.695 [234/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:47.954 [235/268] Linking target lib/librte_ring.so.24.1 00:02:47.954 [236/268] Linking target lib/librte_pci.so.24.1 00:02:47.954 [237/268] Linking target lib/librte_dmadev.so.24.1 00:02:47.954 [238/268] Linking target lib/librte_meter.so.24.1 00:02:47.954 [239/268] Linking target lib/librte_timer.so.24.1 00:02:47.954 [240/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:47.954 [241/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.954 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:47.954 [243/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:47.954 [244/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:47.954 [245/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:47.954 [246/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:47.954 [247/268] Linking target lib/librte_rcu.so.24.1 00:02:47.954 [248/268] Linking target lib/librte_mempool.so.24.1 00:02:47.954 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:48.212 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:48.212 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:48.212 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:48.212 [253/268] Linking target lib/librte_mbuf.so.24.1 00:02:48.471 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:48.471 [255/268] Linking target lib/librte_net.so.24.1 00:02:48.471 [256/268] Linking target lib/librte_cryptodev.so.24.1 00:02:48.471 [257/268] Linking target lib/librte_compressdev.so.24.1 00:02:48.471 [258/268] Linking target lib/librte_reorder.so.24.1 00:02:48.732 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:48.732 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:48.732 [261/268] Linking target lib/librte_security.so.24.1 00:02:48.732 [262/268] Linking target lib/librte_cmdline.so.24.1 00:02:48.732 [263/268] Linking target lib/librte_hash.so.24.1 00:02:48.732 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:48.732 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:48.990 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:48.990 [267/268] Linking target lib/librte_power.so.24.1 00:02:48.990 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:48.990 INFO: autodetecting backend as ninja 00:02:48.990 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:50.381 CC lib/ut_mock/mock.o 00:02:50.381 CC lib/log/log.o 00:02:50.381 CC lib/log/log_flags.o 00:02:50.381 CC lib/log/log_deprecated.o 00:02:50.381 CC lib/ut/ut.o 00:02:50.381 LIB libspdk_log.a 00:02:50.381 LIB libspdk_ut.a 00:02:50.381 LIB libspdk_ut_mock.a 00:02:50.381 SO libspdk_log.so.7.0 00:02:50.381 SO libspdk_ut.so.2.0 00:02:50.381 SO libspdk_ut_mock.so.6.0 00:02:50.639 SYMLINK libspdk_ut_mock.so 00:02:50.639 SYMLINK libspdk_ut.so 00:02:50.639 SYMLINK 
libspdk_log.so 00:02:50.896 CC lib/ioat/ioat.o 00:02:50.896 CC lib/util/base64.o 00:02:50.896 CC lib/util/bit_array.o 00:02:50.896 CC lib/util/cpuset.o 00:02:50.896 CC lib/util/crc16.o 00:02:50.896 CC lib/dma/dma.o 00:02:50.896 CC lib/util/crc32.o 00:02:50.896 CC lib/util/crc32c.o 00:02:50.896 CXX lib/trace_parser/trace.o 00:02:50.896 CC lib/vfio_user/host/vfio_user_pci.o 00:02:50.896 CC lib/util/crc32_ieee.o 00:02:50.896 CC lib/util/crc64.o 00:02:50.896 CC lib/util/dif.o 00:02:50.896 CC lib/util/fd.o 00:02:51.154 CC lib/util/file.o 00:02:51.154 CC lib/util/hexlify.o 00:02:51.154 LIB libspdk_dma.a 00:02:51.154 LIB libspdk_ioat.a 00:02:51.154 CC lib/util/iov.o 00:02:51.154 CC lib/vfio_user/host/vfio_user.o 00:02:51.154 SO libspdk_ioat.so.7.0 00:02:51.154 SO libspdk_dma.so.4.0 00:02:51.154 CC lib/util/math.o 00:02:51.154 CC lib/util/pipe.o 00:02:51.154 SYMLINK libspdk_ioat.so 00:02:51.154 SYMLINK libspdk_dma.so 00:02:51.154 CC lib/util/strerror_tls.o 00:02:51.154 CC lib/util/string.o 00:02:51.154 CC lib/util/uuid.o 00:02:51.154 CC lib/util/fd_group.o 00:02:51.411 CC lib/util/xor.o 00:02:51.411 LIB libspdk_vfio_user.a 00:02:51.411 CC lib/util/zipf.o 00:02:51.411 SO libspdk_vfio_user.so.5.0 00:02:51.411 SYMLINK libspdk_vfio_user.so 00:02:51.411 LIB libspdk_util.a 00:02:51.669 SO libspdk_util.so.9.1 00:02:51.669 LIB libspdk_trace_parser.a 00:02:51.669 SYMLINK libspdk_util.so 00:02:51.927 SO libspdk_trace_parser.so.5.0 00:02:51.927 SYMLINK libspdk_trace_parser.so 00:02:51.927 CC lib/vmd/led.o 00:02:51.927 CC lib/json/json_parse.o 00:02:51.927 CC lib/vmd/vmd.o 00:02:51.927 CC lib/json/json_util.o 00:02:51.927 CC lib/json/json_write.o 00:02:51.927 CC lib/rdma_utils/rdma_utils.o 00:02:51.927 CC lib/conf/conf.o 00:02:51.927 CC lib/env_dpdk/env.o 00:02:51.927 CC lib/rdma_provider/common.o 00:02:51.927 CC lib/idxd/idxd.o 00:02:52.185 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:52.185 CC lib/idxd/idxd_user.o 00:02:52.185 LIB libspdk_conf.a 00:02:52.185 CC lib/idxd/idxd_kernel.o 00:02:52.185 CC lib/env_dpdk/memory.o 00:02:52.185 LIB libspdk_rdma_utils.a 00:02:52.185 SO libspdk_conf.so.6.0 00:02:52.185 SO libspdk_rdma_utils.so.1.0 00:02:52.185 LIB libspdk_json.a 00:02:52.185 LIB libspdk_rdma_provider.a 00:02:52.443 SYMLINK libspdk_conf.so 00:02:52.443 SO libspdk_json.so.6.0 00:02:52.443 SYMLINK libspdk_rdma_utils.so 00:02:52.443 CC lib/env_dpdk/pci.o 00:02:52.443 CC lib/env_dpdk/init.o 00:02:52.443 SO libspdk_rdma_provider.so.6.0 00:02:52.443 CC lib/env_dpdk/threads.o 00:02:52.443 SYMLINK libspdk_rdma_provider.so 00:02:52.443 SYMLINK libspdk_json.so 00:02:52.443 CC lib/env_dpdk/pci_ioat.o 00:02:52.443 CC lib/env_dpdk/pci_virtio.o 00:02:52.443 CC lib/env_dpdk/pci_vmd.o 00:02:52.443 LIB libspdk_idxd.a 00:02:52.443 CC lib/env_dpdk/pci_idxd.o 00:02:52.443 CC lib/env_dpdk/pci_event.o 00:02:52.443 SO libspdk_idxd.so.12.0 00:02:52.443 CC lib/env_dpdk/sigbus_handler.o 00:02:52.701 LIB libspdk_vmd.a 00:02:52.701 SO libspdk_vmd.so.6.0 00:02:52.701 SYMLINK libspdk_idxd.so 00:02:52.701 CC lib/env_dpdk/pci_dpdk.o 00:02:52.701 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:52.701 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:52.701 SYMLINK libspdk_vmd.so 00:02:52.701 CC lib/jsonrpc/jsonrpc_server.o 00:02:52.701 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:52.701 CC lib/jsonrpc/jsonrpc_client.o 00:02:52.701 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:52.958 LIB libspdk_jsonrpc.a 00:02:52.958 SO libspdk_jsonrpc.so.6.0 00:02:53.216 SYMLINK libspdk_jsonrpc.so 00:02:53.217 LIB libspdk_env_dpdk.a 00:02:53.474 SO libspdk_env_dpdk.so.14.1 
00:02:53.474 CC lib/rpc/rpc.o 00:02:53.474 SYMLINK libspdk_env_dpdk.so 00:02:53.731 LIB libspdk_rpc.a 00:02:53.731 SO libspdk_rpc.so.6.0 00:02:53.988 SYMLINK libspdk_rpc.so 00:02:54.245 CC lib/trace/trace.o 00:02:54.245 CC lib/trace/trace_rpc.o 00:02:54.245 CC lib/trace/trace_flags.o 00:02:54.245 CC lib/keyring/keyring_rpc.o 00:02:54.245 CC lib/keyring/keyring.o 00:02:54.245 CC lib/notify/notify.o 00:02:54.245 CC lib/notify/notify_rpc.o 00:02:54.502 LIB libspdk_notify.a 00:02:54.502 SO libspdk_notify.so.6.0 00:02:54.502 LIB libspdk_keyring.a 00:02:54.502 LIB libspdk_trace.a 00:02:54.502 SO libspdk_keyring.so.1.0 00:02:54.502 SO libspdk_trace.so.10.0 00:02:54.502 SYMLINK libspdk_notify.so 00:02:54.502 SYMLINK libspdk_keyring.so 00:02:54.502 SYMLINK libspdk_trace.so 00:02:55.067 CC lib/sock/sock.o 00:02:55.067 CC lib/sock/sock_rpc.o 00:02:55.067 CC lib/thread/thread.o 00:02:55.067 CC lib/thread/iobuf.o 00:02:55.336 LIB libspdk_sock.a 00:02:55.336 SO libspdk_sock.so.10.0 00:02:55.336 SYMLINK libspdk_sock.so 00:02:55.899 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:55.899 CC lib/nvme/nvme_ctrlr.o 00:02:55.899 CC lib/nvme/nvme_fabric.o 00:02:55.899 CC lib/nvme/nvme_ns.o 00:02:55.899 CC lib/nvme/nvme_ns_cmd.o 00:02:55.899 CC lib/nvme/nvme_pcie_common.o 00:02:55.899 CC lib/nvme/nvme_pcie.o 00:02:55.899 CC lib/nvme/nvme_qpair.o 00:02:55.899 CC lib/nvme/nvme.o 00:02:56.189 LIB libspdk_thread.a 00:02:56.189 SO libspdk_thread.so.10.1 00:02:56.446 SYMLINK libspdk_thread.so 00:02:56.446 CC lib/nvme/nvme_quirks.o 00:02:56.446 CC lib/nvme/nvme_transport.o 00:02:56.446 CC lib/nvme/nvme_discovery.o 00:02:56.446 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:56.446 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:56.702 CC lib/nvme/nvme_tcp.o 00:02:56.702 CC lib/nvme/nvme_opal.o 00:02:56.702 CC lib/nvme/nvme_io_msg.o 00:02:56.702 CC lib/accel/accel.o 00:02:56.702 CC lib/accel/accel_rpc.o 00:02:56.960 CC lib/accel/accel_sw.o 00:02:56.960 CC lib/nvme/nvme_poll_group.o 00:02:57.217 CC lib/nvme/nvme_zns.o 00:02:57.217 CC lib/blob/blobstore.o 00:02:57.217 CC lib/nvme/nvme_stubs.o 00:02:57.217 CC lib/init/json_config.o 00:02:57.217 CC lib/init/subsystem.o 00:02:57.217 CC lib/init/subsystem_rpc.o 00:02:57.475 CC lib/init/rpc.o 00:02:57.475 CC lib/nvme/nvme_auth.o 00:02:57.475 CC lib/nvme/nvme_cuse.o 00:02:57.475 CC lib/nvme/nvme_rdma.o 00:02:57.476 LIB libspdk_init.a 00:02:57.733 SO libspdk_init.so.5.0 00:02:57.733 LIB libspdk_accel.a 00:02:57.733 CC lib/blob/request.o 00:02:57.733 SYMLINK libspdk_init.so 00:02:57.733 SO libspdk_accel.so.15.1 00:02:57.733 CC lib/blob/zeroes.o 00:02:57.733 SYMLINK libspdk_accel.so 00:02:57.733 CC lib/blob/blob_bs_dev.o 00:02:58.015 CC lib/virtio/virtio.o 00:02:58.015 CC lib/virtio/virtio_vhost_user.o 00:02:58.015 CC lib/virtio/virtio_vfio_user.o 00:02:58.015 CC lib/event/app.o 00:02:58.015 CC lib/bdev/bdev.o 00:02:58.015 CC lib/bdev/bdev_rpc.o 00:02:58.273 CC lib/event/reactor.o 00:02:58.273 CC lib/bdev/bdev_zone.o 00:02:58.273 CC lib/virtio/virtio_pci.o 00:02:58.273 CC lib/event/log_rpc.o 00:02:58.273 CC lib/event/app_rpc.o 00:02:58.273 CC lib/event/scheduler_static.o 00:02:58.531 CC lib/bdev/part.o 00:02:58.531 CC lib/bdev/scsi_nvme.o 00:02:58.531 LIB libspdk_virtio.a 00:02:58.531 SO libspdk_virtio.so.7.0 00:02:58.531 LIB libspdk_event.a 00:02:58.790 SO libspdk_event.so.14.0 00:02:58.790 SYMLINK libspdk_virtio.so 00:02:58.790 LIB libspdk_nvme.a 00:02:58.790 SYMLINK libspdk_event.so 00:02:59.048 SO libspdk_nvme.so.13.1 00:02:59.307 SYMLINK libspdk_nvme.so 00:02:59.889 LIB libspdk_blob.a 00:02:59.889 SO 
libspdk_blob.so.11.0 00:02:59.889 SYMLINK libspdk_blob.so 00:03:00.455 CC lib/lvol/lvol.o 00:03:00.455 LIB libspdk_bdev.a 00:03:00.455 CC lib/blobfs/tree.o 00:03:00.455 CC lib/blobfs/blobfs.o 00:03:00.455 SO libspdk_bdev.so.15.1 00:03:00.455 SYMLINK libspdk_bdev.so 00:03:00.713 CC lib/nbd/nbd.o 00:03:00.713 CC lib/nbd/nbd_rpc.o 00:03:00.713 CC lib/ftl/ftl_core.o 00:03:00.713 CC lib/ftl/ftl_init.o 00:03:00.713 CC lib/ftl/ftl_layout.o 00:03:00.713 CC lib/scsi/dev.o 00:03:00.713 CC lib/ublk/ublk.o 00:03:00.713 CC lib/nvmf/ctrlr.o 00:03:00.971 CC lib/nvmf/ctrlr_discovery.o 00:03:00.971 CC lib/ftl/ftl_debug.o 00:03:00.971 CC lib/scsi/lun.o 00:03:00.971 CC lib/ftl/ftl_io.o 00:03:01.230 LIB libspdk_blobfs.a 00:03:01.230 CC lib/nvmf/ctrlr_bdev.o 00:03:01.230 LIB libspdk_nbd.a 00:03:01.230 SO libspdk_blobfs.so.10.0 00:03:01.230 LIB libspdk_lvol.a 00:03:01.230 SO libspdk_nbd.so.7.0 00:03:01.230 SO libspdk_lvol.so.10.0 00:03:01.230 SYMLINK libspdk_blobfs.so 00:03:01.230 CC lib/ftl/ftl_sb.o 00:03:01.230 CC lib/scsi/port.o 00:03:01.230 SYMLINK libspdk_nbd.so 00:03:01.230 CC lib/scsi/scsi.o 00:03:01.230 SYMLINK libspdk_lvol.so 00:03:01.230 CC lib/ftl/ftl_l2p.o 00:03:01.230 CC lib/ftl/ftl_l2p_flat.o 00:03:01.230 CC lib/ublk/ublk_rpc.o 00:03:01.489 CC lib/ftl/ftl_nv_cache.o 00:03:01.489 CC lib/nvmf/subsystem.o 00:03:01.489 CC lib/scsi/scsi_bdev.o 00:03:01.489 CC lib/scsi/scsi_pr.o 00:03:01.489 CC lib/scsi/scsi_rpc.o 00:03:01.489 CC lib/ftl/ftl_band.o 00:03:01.489 LIB libspdk_ublk.a 00:03:01.489 SO libspdk_ublk.so.3.0 00:03:01.489 CC lib/nvmf/nvmf.o 00:03:01.489 CC lib/nvmf/nvmf_rpc.o 00:03:01.749 SYMLINK libspdk_ublk.so 00:03:01.749 CC lib/nvmf/transport.o 00:03:01.749 CC lib/nvmf/tcp.o 00:03:01.749 CC lib/nvmf/stubs.o 00:03:01.749 CC lib/nvmf/mdns_server.o 00:03:02.008 CC lib/scsi/task.o 00:03:02.268 LIB libspdk_scsi.a 00:03:02.268 CC lib/nvmf/rdma.o 00:03:02.268 CC lib/ftl/ftl_band_ops.o 00:03:02.268 CC lib/nvmf/auth.o 00:03:02.268 SO libspdk_scsi.so.9.0 00:03:02.268 CC lib/ftl/ftl_rq.o 00:03:02.268 CC lib/ftl/ftl_writer.o 00:03:02.268 CC lib/ftl/ftl_reloc.o 00:03:02.526 SYMLINK libspdk_scsi.so 00:03:02.526 CC lib/ftl/ftl_l2p_cache.o 00:03:02.526 CC lib/ftl/ftl_p2l.o 00:03:02.526 CC lib/iscsi/conn.o 00:03:02.526 CC lib/iscsi/init_grp.o 00:03:02.785 CC lib/iscsi/iscsi.o 00:03:02.785 CC lib/vhost/vhost.o 00:03:02.785 CC lib/vhost/vhost_rpc.o 00:03:03.044 CC lib/vhost/vhost_scsi.o 00:03:03.044 CC lib/ftl/mngt/ftl_mngt.o 00:03:03.044 CC lib/iscsi/md5.o 00:03:03.044 CC lib/iscsi/param.o 00:03:03.303 CC lib/iscsi/portal_grp.o 00:03:03.303 CC lib/vhost/vhost_blk.o 00:03:03.303 CC lib/vhost/rte_vhost_user.o 00:03:03.303 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:03.303 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:03.562 CC lib/iscsi/tgt_node.o 00:03:03.562 CC lib/iscsi/iscsi_subsystem.o 00:03:03.562 CC lib/iscsi/iscsi_rpc.o 00:03:03.562 CC lib/iscsi/task.o 00:03:03.562 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:03.820 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:03.820 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:03.820 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:03.820 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:03.820 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:04.079 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:04.079 LIB libspdk_iscsi.a 00:03:04.079 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:04.079 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:04.079 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:04.079 CC lib/ftl/utils/ftl_conf.o 00:03:04.079 CC lib/ftl/utils/ftl_md.o 00:03:04.079 SO libspdk_iscsi.so.8.0 00:03:04.338 LIB libspdk_vhost.a 00:03:04.338 LIB libspdk_nvmf.a 
00:03:04.338 CC lib/ftl/utils/ftl_mempool.o 00:03:04.338 CC lib/ftl/utils/ftl_bitmap.o 00:03:04.338 SO libspdk_vhost.so.8.0 00:03:04.338 CC lib/ftl/utils/ftl_property.o 00:03:04.338 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:04.338 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:04.338 SYMLINK libspdk_iscsi.so 00:03:04.338 SO libspdk_nvmf.so.19.0 00:03:04.338 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:04.338 SYMLINK libspdk_vhost.so 00:03:04.597 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:04.597 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:04.597 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:04.597 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:04.597 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:04.597 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:04.597 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:04.597 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:04.597 SYMLINK libspdk_nvmf.so 00:03:04.597 CC lib/ftl/base/ftl_base_dev.o 00:03:04.597 CC lib/ftl/base/ftl_base_bdev.o 00:03:04.597 CC lib/ftl/ftl_trace.o 00:03:04.855 LIB libspdk_ftl.a 00:03:05.113 SO libspdk_ftl.so.9.0 00:03:05.691 SYMLINK libspdk_ftl.so 00:03:06.256 CC module/env_dpdk/env_dpdk_rpc.o 00:03:06.256 CC module/accel/ioat/accel_ioat.o 00:03:06.256 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:06.256 CC module/sock/posix/posix.o 00:03:06.256 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:06.256 CC module/blob/bdev/blob_bdev.o 00:03:06.256 CC module/scheduler/gscheduler/gscheduler.o 00:03:06.256 CC module/accel/dsa/accel_dsa.o 00:03:06.256 CC module/keyring/file/keyring.o 00:03:06.256 CC module/accel/error/accel_error.o 00:03:06.257 LIB libspdk_env_dpdk_rpc.a 00:03:06.257 SO libspdk_env_dpdk_rpc.so.6.0 00:03:06.257 LIB libspdk_scheduler_gscheduler.a 00:03:06.257 LIB libspdk_scheduler_dpdk_governor.a 00:03:06.257 CC module/keyring/file/keyring_rpc.o 00:03:06.257 SO libspdk_scheduler_gscheduler.so.4.0 00:03:06.257 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:06.257 CC module/accel/error/accel_error_rpc.o 00:03:06.257 SYMLINK libspdk_env_dpdk_rpc.so 00:03:06.257 CC module/accel/ioat/accel_ioat_rpc.o 00:03:06.257 LIB libspdk_scheduler_dynamic.a 00:03:06.257 CC module/accel/dsa/accel_dsa_rpc.o 00:03:06.514 SYMLINK libspdk_scheduler_gscheduler.so 00:03:06.514 SO libspdk_scheduler_dynamic.so.4.0 00:03:06.514 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:06.514 LIB libspdk_blob_bdev.a 00:03:06.514 SYMLINK libspdk_scheduler_dynamic.so 00:03:06.514 SO libspdk_blob_bdev.so.11.0 00:03:06.515 LIB libspdk_keyring_file.a 00:03:06.515 LIB libspdk_accel_ioat.a 00:03:06.515 LIB libspdk_accel_error.a 00:03:06.515 LIB libspdk_accel_dsa.a 00:03:06.515 SO libspdk_keyring_file.so.1.0 00:03:06.515 SYMLINK libspdk_blob_bdev.so 00:03:06.515 SO libspdk_accel_ioat.so.6.0 00:03:06.515 SO libspdk_accel_error.so.2.0 00:03:06.515 SO libspdk_accel_dsa.so.5.0 00:03:06.515 SYMLINK libspdk_keyring_file.so 00:03:06.515 SYMLINK libspdk_accel_error.so 00:03:06.515 SYMLINK libspdk_accel_ioat.so 00:03:06.515 CC module/keyring/linux/keyring.o 00:03:06.515 CC module/keyring/linux/keyring_rpc.o 00:03:06.515 CC module/accel/iaa/accel_iaa.o 00:03:06.515 SYMLINK libspdk_accel_dsa.so 00:03:06.515 CC module/accel/iaa/accel_iaa_rpc.o 00:03:06.772 LIB libspdk_keyring_linux.a 00:03:06.772 SO libspdk_keyring_linux.so.1.0 00:03:06.772 CC module/blobfs/bdev/blobfs_bdev.o 00:03:06.772 CC module/bdev/delay/vbdev_delay.o 00:03:06.772 LIB libspdk_accel_iaa.a 00:03:06.772 CC module/bdev/lvol/vbdev_lvol.o 00:03:06.772 CC module/bdev/error/vbdev_error.o 00:03:06.772 CC module/bdev/gpt/gpt.o 00:03:06.772 
SYMLINK libspdk_keyring_linux.so 00:03:06.772 CC module/bdev/error/vbdev_error_rpc.o 00:03:06.772 LIB libspdk_sock_posix.a 00:03:07.031 SO libspdk_accel_iaa.so.3.0 00:03:07.031 SO libspdk_sock_posix.so.6.0 00:03:07.031 CC module/bdev/malloc/bdev_malloc.o 00:03:07.031 CC module/bdev/null/bdev_null.o 00:03:07.031 SYMLINK libspdk_accel_iaa.so 00:03:07.031 CC module/bdev/null/bdev_null_rpc.o 00:03:07.031 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:07.031 SYMLINK libspdk_sock_posix.so 00:03:07.031 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:07.031 CC module/bdev/gpt/vbdev_gpt.o 00:03:07.289 LIB libspdk_bdev_error.a 00:03:07.289 SO libspdk_bdev_error.so.6.0 00:03:07.289 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:07.289 LIB libspdk_bdev_null.a 00:03:07.289 SYMLINK libspdk_bdev_error.so 00:03:07.289 LIB libspdk_blobfs_bdev.a 00:03:07.289 SO libspdk_bdev_null.so.6.0 00:03:07.289 LIB libspdk_bdev_gpt.a 00:03:07.289 SO libspdk_blobfs_bdev.so.6.0 00:03:07.289 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:07.289 SO libspdk_bdev_gpt.so.6.0 00:03:07.549 CC module/bdev/nvme/bdev_nvme.o 00:03:07.549 SYMLINK libspdk_bdev_null.so 00:03:07.549 SYMLINK libspdk_blobfs_bdev.so 00:03:07.549 LIB libspdk_bdev_delay.a 00:03:07.549 CC module/bdev/passthru/vbdev_passthru.o 00:03:07.549 SO libspdk_bdev_delay.so.6.0 00:03:07.549 SYMLINK libspdk_bdev_gpt.so 00:03:07.549 CC module/bdev/raid/bdev_raid.o 00:03:07.549 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:07.549 CC module/bdev/split/vbdev_split.o 00:03:07.549 SYMLINK libspdk_bdev_delay.so 00:03:07.549 CC module/bdev/split/vbdev_split_rpc.o 00:03:07.549 LIB libspdk_bdev_malloc.a 00:03:07.549 LIB libspdk_bdev_lvol.a 00:03:07.549 SO libspdk_bdev_malloc.so.6.0 00:03:07.549 SO libspdk_bdev_lvol.so.6.0 00:03:07.549 CC module/bdev/aio/bdev_aio.o 00:03:07.549 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:07.807 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:07.807 SYMLINK libspdk_bdev_malloc.so 00:03:07.807 CC module/bdev/raid/bdev_raid_rpc.o 00:03:07.807 SYMLINK libspdk_bdev_lvol.so 00:03:07.807 LIB libspdk_bdev_passthru.a 00:03:07.807 SO libspdk_bdev_passthru.so.6.0 00:03:07.807 LIB libspdk_bdev_split.a 00:03:07.807 SO libspdk_bdev_split.so.6.0 00:03:07.807 SYMLINK libspdk_bdev_passthru.so 00:03:07.807 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:07.807 CC module/bdev/nvme/nvme_rpc.o 00:03:07.807 SYMLINK libspdk_bdev_split.so 00:03:07.807 CC module/bdev/ftl/bdev_ftl.o 00:03:07.807 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:07.807 CC module/bdev/iscsi/bdev_iscsi.o 00:03:07.807 CC module/bdev/nvme/bdev_mdns_client.o 00:03:08.066 LIB libspdk_bdev_zone_block.a 00:03:08.066 CC module/bdev/aio/bdev_aio_rpc.o 00:03:08.066 SO libspdk_bdev_zone_block.so.6.0 00:03:08.066 SYMLINK libspdk_bdev_zone_block.so 00:03:08.066 CC module/bdev/nvme/vbdev_opal.o 00:03:08.066 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:08.066 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:08.066 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:08.325 LIB libspdk_bdev_aio.a 00:03:08.325 LIB libspdk_bdev_ftl.a 00:03:08.325 CC module/bdev/raid/bdev_raid_sb.o 00:03:08.325 SO libspdk_bdev_aio.so.6.0 00:03:08.325 SO libspdk_bdev_ftl.so.6.0 00:03:08.325 CC module/bdev/raid/raid0.o 00:03:08.325 LIB libspdk_bdev_iscsi.a 00:03:08.325 SYMLINK libspdk_bdev_aio.so 00:03:08.325 CC module/bdev/raid/raid1.o 00:03:08.325 CC module/bdev/raid/concat.o 00:03:08.325 SO libspdk_bdev_iscsi.so.6.0 00:03:08.325 SYMLINK libspdk_bdev_ftl.so 00:03:08.583 SYMLINK libspdk_bdev_iscsi.so 00:03:08.583 CC 
module/bdev/virtio/bdev_virtio_scsi.o 00:03:08.583 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:08.583 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:08.841 LIB libspdk_bdev_raid.a 00:03:08.841 SO libspdk_bdev_raid.so.6.0 00:03:08.841 SYMLINK libspdk_bdev_raid.so 00:03:09.409 LIB libspdk_bdev_virtio.a 00:03:09.409 SO libspdk_bdev_virtio.so.6.0 00:03:09.409 SYMLINK libspdk_bdev_virtio.so 00:03:09.409 LIB libspdk_bdev_nvme.a 00:03:09.667 SO libspdk_bdev_nvme.so.7.0 00:03:09.667 SYMLINK libspdk_bdev_nvme.so 00:03:10.234 CC module/event/subsystems/keyring/keyring.o 00:03:10.234 CC module/event/subsystems/scheduler/scheduler.o 00:03:10.234 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:10.234 CC module/event/subsystems/sock/sock.o 00:03:10.234 CC module/event/subsystems/iobuf/iobuf.o 00:03:10.234 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:10.493 CC module/event/subsystems/vmd/vmd.o 00:03:10.493 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:10.493 LIB libspdk_event_scheduler.a 00:03:10.493 LIB libspdk_event_keyring.a 00:03:10.493 LIB libspdk_event_sock.a 00:03:10.493 SO libspdk_event_scheduler.so.4.0 00:03:10.493 SO libspdk_event_keyring.so.1.0 00:03:10.493 LIB libspdk_event_vmd.a 00:03:10.493 LIB libspdk_event_vhost_blk.a 00:03:10.493 SO libspdk_event_sock.so.5.0 00:03:10.493 LIB libspdk_event_iobuf.a 00:03:10.493 SYMLINK libspdk_event_scheduler.so 00:03:10.493 SYMLINK libspdk_event_keyring.so 00:03:10.493 SO libspdk_event_vhost_blk.so.3.0 00:03:10.493 SO libspdk_event_vmd.so.6.0 00:03:10.493 SYMLINK libspdk_event_sock.so 00:03:10.493 SO libspdk_event_iobuf.so.3.0 00:03:10.752 SYMLINK libspdk_event_vhost_blk.so 00:03:10.752 SYMLINK libspdk_event_vmd.so 00:03:10.752 SYMLINK libspdk_event_iobuf.so 00:03:11.010 CC module/event/subsystems/accel/accel.o 00:03:11.269 LIB libspdk_event_accel.a 00:03:11.269 SO libspdk_event_accel.so.6.0 00:03:11.269 SYMLINK libspdk_event_accel.so 00:03:11.837 CC module/event/subsystems/bdev/bdev.o 00:03:11.837 LIB libspdk_event_bdev.a 00:03:11.837 SO libspdk_event_bdev.so.6.0 00:03:12.096 SYMLINK libspdk_event_bdev.so 00:03:12.354 CC module/event/subsystems/nbd/nbd.o 00:03:12.354 CC module/event/subsystems/scsi/scsi.o 00:03:12.354 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:12.354 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:12.354 CC module/event/subsystems/ublk/ublk.o 00:03:12.614 LIB libspdk_event_nbd.a 00:03:12.614 LIB libspdk_event_scsi.a 00:03:12.614 SO libspdk_event_nbd.so.6.0 00:03:12.614 LIB libspdk_event_ublk.a 00:03:12.614 SO libspdk_event_scsi.so.6.0 00:03:12.614 SYMLINK libspdk_event_nbd.so 00:03:12.614 SO libspdk_event_ublk.so.3.0 00:03:12.614 LIB libspdk_event_nvmf.a 00:03:12.614 SYMLINK libspdk_event_scsi.so 00:03:12.614 SO libspdk_event_nvmf.so.6.0 00:03:12.614 SYMLINK libspdk_event_ublk.so 00:03:12.872 SYMLINK libspdk_event_nvmf.so 00:03:13.129 CC module/event/subsystems/iscsi/iscsi.o 00:03:13.129 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:13.129 LIB libspdk_event_iscsi.a 00:03:13.129 LIB libspdk_event_vhost_scsi.a 00:03:13.387 SO libspdk_event_iscsi.so.6.0 00:03:13.387 SO libspdk_event_vhost_scsi.so.3.0 00:03:13.387 SYMLINK libspdk_event_iscsi.so 00:03:13.387 SYMLINK libspdk_event_vhost_scsi.so 00:03:13.645 SO libspdk.so.6.0 00:03:13.645 SYMLINK libspdk.so 00:03:13.903 CXX app/trace/trace.o 00:03:13.904 CC app/spdk_lspci/spdk_lspci.o 00:03:13.904 CC app/trace_record/trace_record.o 00:03:13.904 CC app/spdk_nvme_perf/perf.o 00:03:13.904 CC app/iscsi_tgt/iscsi_tgt.o 00:03:13.904 CC app/nvmf_tgt/nvmf_main.o 
00:03:13.904 CC app/spdk_tgt/spdk_tgt.o 00:03:13.904 CC examples/ioat/perf/perf.o 00:03:13.904 CC test/thread/poller_perf/poller_perf.o 00:03:13.904 LINK spdk_lspci 00:03:13.904 CC examples/util/zipf/zipf.o 00:03:14.177 LINK nvmf_tgt 00:03:14.177 LINK spdk_trace_record 00:03:14.177 LINK poller_perf 00:03:14.177 LINK iscsi_tgt 00:03:14.177 LINK spdk_tgt 00:03:14.177 LINK zipf 00:03:14.177 LINK ioat_perf 00:03:14.177 LINK spdk_trace 00:03:14.434 CC app/spdk_nvme_identify/identify.o 00:03:14.434 CC examples/ioat/verify/verify.o 00:03:14.434 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:14.434 TEST_HEADER include/spdk/accel.h 00:03:14.434 TEST_HEADER include/spdk/accel_module.h 00:03:14.434 TEST_HEADER include/spdk/assert.h 00:03:14.434 TEST_HEADER include/spdk/barrier.h 00:03:14.434 TEST_HEADER include/spdk/base64.h 00:03:14.434 TEST_HEADER include/spdk/bdev.h 00:03:14.434 TEST_HEADER include/spdk/bdev_module.h 00:03:14.434 TEST_HEADER include/spdk/bdev_zone.h 00:03:14.692 TEST_HEADER include/spdk/bit_array.h 00:03:14.692 TEST_HEADER include/spdk/bit_pool.h 00:03:14.692 TEST_HEADER include/spdk/blob_bdev.h 00:03:14.692 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:14.692 TEST_HEADER include/spdk/blobfs.h 00:03:14.692 TEST_HEADER include/spdk/blob.h 00:03:14.692 TEST_HEADER include/spdk/conf.h 00:03:14.692 TEST_HEADER include/spdk/config.h 00:03:14.692 TEST_HEADER include/spdk/cpuset.h 00:03:14.692 TEST_HEADER include/spdk/crc16.h 00:03:14.692 TEST_HEADER include/spdk/crc32.h 00:03:14.692 TEST_HEADER include/spdk/crc64.h 00:03:14.692 CC test/dma/test_dma/test_dma.o 00:03:14.692 TEST_HEADER include/spdk/dif.h 00:03:14.692 TEST_HEADER include/spdk/dma.h 00:03:14.692 TEST_HEADER include/spdk/endian.h 00:03:14.692 TEST_HEADER include/spdk/env_dpdk.h 00:03:14.692 TEST_HEADER include/spdk/env.h 00:03:14.692 TEST_HEADER include/spdk/event.h 00:03:14.692 TEST_HEADER include/spdk/fd_group.h 00:03:14.692 TEST_HEADER include/spdk/fd.h 00:03:14.692 TEST_HEADER include/spdk/file.h 00:03:14.692 TEST_HEADER include/spdk/ftl.h 00:03:14.692 TEST_HEADER include/spdk/gpt_spec.h 00:03:14.692 CC examples/thread/thread/thread_ex.o 00:03:14.692 TEST_HEADER include/spdk/hexlify.h 00:03:14.692 TEST_HEADER include/spdk/histogram_data.h 00:03:14.692 TEST_HEADER include/spdk/idxd.h 00:03:14.692 TEST_HEADER include/spdk/idxd_spec.h 00:03:14.693 TEST_HEADER include/spdk/init.h 00:03:14.693 TEST_HEADER include/spdk/ioat.h 00:03:14.693 TEST_HEADER include/spdk/ioat_spec.h 00:03:14.693 TEST_HEADER include/spdk/iscsi_spec.h 00:03:14.693 LINK spdk_nvme_perf 00:03:14.693 TEST_HEADER include/spdk/json.h 00:03:14.693 TEST_HEADER include/spdk/jsonrpc.h 00:03:14.693 TEST_HEADER include/spdk/keyring.h 00:03:14.693 LINK verify 00:03:14.693 TEST_HEADER include/spdk/keyring_module.h 00:03:14.693 TEST_HEADER include/spdk/likely.h 00:03:14.693 TEST_HEADER include/spdk/log.h 00:03:14.693 TEST_HEADER include/spdk/lvol.h 00:03:14.693 CC test/app/bdev_svc/bdev_svc.o 00:03:14.693 TEST_HEADER include/spdk/memory.h 00:03:14.693 TEST_HEADER include/spdk/mmio.h 00:03:14.693 TEST_HEADER include/spdk/nbd.h 00:03:14.693 TEST_HEADER include/spdk/notify.h 00:03:14.693 TEST_HEADER include/spdk/nvme.h 00:03:14.693 TEST_HEADER include/spdk/nvme_intel.h 00:03:14.693 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:14.693 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:14.693 TEST_HEADER include/spdk/nvme_spec.h 00:03:14.693 TEST_HEADER include/spdk/nvme_zns.h 00:03:14.693 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:14.693 TEST_HEADER include/spdk/nvmf_fc_spec.h 
00:03:14.693 TEST_HEADER include/spdk/nvmf.h 00:03:14.693 TEST_HEADER include/spdk/nvmf_spec.h 00:03:14.693 LINK interrupt_tgt 00:03:14.693 TEST_HEADER include/spdk/nvmf_transport.h 00:03:14.693 TEST_HEADER include/spdk/opal.h 00:03:14.693 TEST_HEADER include/spdk/opal_spec.h 00:03:14.693 TEST_HEADER include/spdk/pci_ids.h 00:03:14.693 TEST_HEADER include/spdk/pipe.h 00:03:14.693 TEST_HEADER include/spdk/queue.h 00:03:14.693 TEST_HEADER include/spdk/reduce.h 00:03:14.693 TEST_HEADER include/spdk/rpc.h 00:03:14.693 TEST_HEADER include/spdk/scheduler.h 00:03:14.693 TEST_HEADER include/spdk/scsi.h 00:03:14.693 TEST_HEADER include/spdk/scsi_spec.h 00:03:14.693 TEST_HEADER include/spdk/sock.h 00:03:14.693 TEST_HEADER include/spdk/stdinc.h 00:03:14.693 TEST_HEADER include/spdk/string.h 00:03:14.693 TEST_HEADER include/spdk/thread.h 00:03:14.693 TEST_HEADER include/spdk/trace.h 00:03:14.693 TEST_HEADER include/spdk/trace_parser.h 00:03:14.693 TEST_HEADER include/spdk/tree.h 00:03:14.693 TEST_HEADER include/spdk/ublk.h 00:03:14.693 CC test/env/mem_callbacks/mem_callbacks.o 00:03:14.693 TEST_HEADER include/spdk/util.h 00:03:14.693 TEST_HEADER include/spdk/uuid.h 00:03:14.693 TEST_HEADER include/spdk/version.h 00:03:14.693 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:14.693 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:14.693 TEST_HEADER include/spdk/vhost.h 00:03:14.693 TEST_HEADER include/spdk/vmd.h 00:03:14.693 TEST_HEADER include/spdk/xor.h 00:03:14.693 TEST_HEADER include/spdk/zipf.h 00:03:14.693 CXX test/cpp_headers/accel.o 00:03:14.951 LINK bdev_svc 00:03:14.951 LINK thread 00:03:14.951 LINK test_dma 00:03:14.951 CXX test/cpp_headers/accel_module.o 00:03:14.951 CC examples/sock/hello_world/hello_sock.o 00:03:14.951 CC test/app/histogram_perf/histogram_perf.o 00:03:14.951 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:15.208 LINK spdk_nvme_identify 00:03:15.208 CXX test/cpp_headers/assert.o 00:03:15.208 CC test/app/jsoncat/jsoncat.o 00:03:15.208 LINK histogram_perf 00:03:15.208 LINK mem_callbacks 00:03:15.208 LINK hello_sock 00:03:15.208 CC examples/vmd/lsvmd/lsvmd.o 00:03:15.208 CXX test/cpp_headers/barrier.o 00:03:15.208 LINK jsoncat 00:03:15.466 CC test/app/stub/stub.o 00:03:15.466 CC app/spdk_nvme_discover/discovery_aer.o 00:03:15.466 LINK lsvmd 00:03:15.466 LINK nvme_fuzz 00:03:15.466 CXX test/cpp_headers/base64.o 00:03:15.466 LINK stub 00:03:15.724 CXX test/cpp_headers/bdev.o 00:03:15.724 CC test/env/vtophys/vtophys.o 00:03:15.724 CC examples/idxd/perf/perf.o 00:03:15.724 CC app/spdk_top/spdk_top.o 00:03:15.724 LINK spdk_nvme_discover 00:03:15.724 CXX test/cpp_headers/bdev_module.o 00:03:15.984 LINK vtophys 00:03:15.984 CC app/vhost/vhost.o 00:03:15.984 CC examples/vmd/led/led.o 00:03:15.984 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:15.984 CXX test/cpp_headers/bdev_zone.o 00:03:15.984 LINK idxd_perf 00:03:15.984 CC test/event/event_perf/event_perf.o 00:03:15.984 LINK vhost 00:03:15.984 CC test/event/reactor/reactor.o 00:03:16.243 LINK led 00:03:16.243 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:16.243 CXX test/cpp_headers/bit_array.o 00:03:16.243 LINK reactor 00:03:16.243 LINK event_perf 00:03:16.243 LINK env_dpdk_post_init 00:03:16.243 CC test/event/reactor_perf/reactor_perf.o 00:03:16.502 CXX test/cpp_headers/bit_pool.o 00:03:16.502 CXX test/cpp_headers/blob_bdev.o 00:03:16.502 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:16.502 LINK spdk_top 00:03:16.502 LINK reactor_perf 00:03:16.502 CC test/rpc_client/rpc_client_test.o 00:03:16.502 CXX 
test/cpp_headers/blobfs_bdev.o 00:03:16.502 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:16.770 CC test/env/memory/memory_ut.o 00:03:16.770 CC test/nvme/aer/aer.o 00:03:16.770 LINK rpc_client_test 00:03:16.771 CC test/event/app_repeat/app_repeat.o 00:03:16.771 CXX test/cpp_headers/blobfs.o 00:03:16.771 CC test/accel/dif/dif.o 00:03:16.771 CC app/spdk_dd/spdk_dd.o 00:03:17.029 LINK aer 00:03:17.029 LINK vhost_fuzz 00:03:17.029 LINK app_repeat 00:03:17.029 CXX test/cpp_headers/blob.o 00:03:17.287 CC test/blobfs/mkfs/mkfs.o 00:03:17.287 LINK spdk_dd 00:03:17.287 LINK dif 00:03:17.287 CXX test/cpp_headers/conf.o 00:03:17.287 CC test/nvme/reset/reset.o 00:03:17.287 CC test/env/pci/pci_ut.o 00:03:17.545 CC test/event/scheduler/scheduler.o 00:03:17.545 LINK mkfs 00:03:17.545 CXX test/cpp_headers/config.o 00:03:17.545 CXX test/cpp_headers/cpuset.o 00:03:17.545 LINK reset 00:03:17.545 LINK memory_ut 00:03:17.803 LINK scheduler 00:03:17.803 LINK iscsi_fuzz 00:03:17.803 CXX test/cpp_headers/crc16.o 00:03:17.803 CC app/fio/nvme/fio_plugin.o 00:03:17.803 LINK pci_ut 00:03:17.803 CC app/fio/bdev/fio_plugin.o 00:03:17.803 CC test/nvme/sgl/sgl.o 00:03:17.803 CC test/lvol/esnap/esnap.o 00:03:17.803 CXX test/cpp_headers/crc32.o 00:03:18.061 CXX test/cpp_headers/crc64.o 00:03:18.061 CXX test/cpp_headers/dif.o 00:03:18.320 LINK sgl 00:03:18.320 CC test/nvme/e2edp/nvme_dp.o 00:03:18.320 CXX test/cpp_headers/dma.o 00:03:18.320 CC examples/accel/perf/accel_perf.o 00:03:18.320 LINK spdk_bdev 00:03:18.579 CC examples/blob/hello_world/hello_blob.o 00:03:18.579 LINK spdk_nvme 00:03:18.579 CXX test/cpp_headers/endian.o 00:03:18.579 CXX test/cpp_headers/env_dpdk.o 00:03:18.579 CXX test/cpp_headers/env.o 00:03:18.579 CC examples/blob/cli/blobcli.o 00:03:18.579 CC examples/nvme/hello_world/hello_world.o 00:03:18.579 LINK nvme_dp 00:03:18.838 LINK hello_blob 00:03:18.839 CXX test/cpp_headers/event.o 00:03:18.839 LINK accel_perf 00:03:19.097 CC test/nvme/overhead/overhead.o 00:03:19.097 CXX test/cpp_headers/fd_group.o 00:03:19.097 LINK hello_world 00:03:19.097 CC test/bdev/bdevio/bdevio.o 00:03:19.097 CXX test/cpp_headers/fd.o 00:03:19.097 CXX test/cpp_headers/file.o 00:03:19.097 CC examples/nvme/reconnect/reconnect.o 00:03:19.355 LINK blobcli 00:03:19.355 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:19.355 CXX test/cpp_headers/ftl.o 00:03:19.355 LINK overhead 00:03:19.355 LINK bdevio 00:03:19.613 CXX test/cpp_headers/gpt_spec.o 00:03:19.613 CXX test/cpp_headers/hexlify.o 00:03:19.613 CC test/nvme/err_injection/err_injection.o 00:03:19.613 LINK reconnect 00:03:19.613 CC examples/nvme/arbitration/arbitration.o 00:03:19.613 CC examples/bdev/hello_world/hello_bdev.o 00:03:19.613 CXX test/cpp_headers/histogram_data.o 00:03:19.613 LINK err_injection 00:03:19.873 LINK nvme_manage 00:03:19.873 CC examples/bdev/bdevperf/bdevperf.o 00:03:19.873 CXX test/cpp_headers/idxd.o 00:03:19.873 LINK hello_bdev 00:03:19.873 CC examples/nvme/hotplug/hotplug.o 00:03:19.873 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:19.873 LINK arbitration 00:03:20.132 CXX test/cpp_headers/idxd_spec.o 00:03:20.132 CC test/nvme/startup/startup.o 00:03:20.132 CC examples/nvme/abort/abort.o 00:03:20.132 LINK cmb_copy 00:03:20.132 LINK hotplug 00:03:20.132 CXX test/cpp_headers/init.o 00:03:20.132 LINK startup 00:03:20.391 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:20.391 CC test/nvme/reserve/reserve.o 00:03:20.391 CXX test/cpp_headers/ioat.o 00:03:20.391 CC test/nvme/simple_copy/simple_copy.o 00:03:20.391 LINK abort 00:03:20.391 LINK 
pmr_persistence 00:03:20.391 CC test/nvme/connect_stress/connect_stress.o 00:03:20.391 LINK bdevperf 00:03:20.649 CC test/nvme/boot_partition/boot_partition.o 00:03:20.649 LINK reserve 00:03:20.649 CXX test/cpp_headers/ioat_spec.o 00:03:20.649 LINK simple_copy 00:03:20.649 LINK connect_stress 00:03:20.649 LINK boot_partition 00:03:20.649 CC test/nvme/compliance/nvme_compliance.o 00:03:20.649 CXX test/cpp_headers/iscsi_spec.o 00:03:20.906 CC test/nvme/fused_ordering/fused_ordering.o 00:03:20.906 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:20.906 CXX test/cpp_headers/json.o 00:03:20.906 CXX test/cpp_headers/jsonrpc.o 00:03:20.906 CC test/nvme/fdp/fdp.o 00:03:20.906 CC test/nvme/cuse/cuse.o 00:03:21.163 CC examples/nvmf/nvmf/nvmf.o 00:03:21.163 LINK nvme_compliance 00:03:21.163 LINK fused_ordering 00:03:21.163 LINK doorbell_aers 00:03:21.163 CXX test/cpp_headers/keyring.o 00:03:21.163 CXX test/cpp_headers/keyring_module.o 00:03:21.163 CXX test/cpp_headers/likely.o 00:03:21.163 CXX test/cpp_headers/log.o 00:03:21.163 CXX test/cpp_headers/lvol.o 00:03:21.163 CXX test/cpp_headers/memory.o 00:03:21.422 CXX test/cpp_headers/mmio.o 00:03:21.422 LINK fdp 00:03:21.422 CXX test/cpp_headers/nbd.o 00:03:21.422 LINK nvmf 00:03:21.422 CXX test/cpp_headers/notify.o 00:03:21.422 CXX test/cpp_headers/nvme.o 00:03:21.422 CXX test/cpp_headers/nvme_intel.o 00:03:21.422 CXX test/cpp_headers/nvme_ocssd.o 00:03:21.422 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:21.422 CXX test/cpp_headers/nvme_spec.o 00:03:21.422 CXX test/cpp_headers/nvme_zns.o 00:03:21.422 CXX test/cpp_headers/nvmf_cmd.o 00:03:21.681 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:21.681 CXX test/cpp_headers/nvmf.o 00:03:21.681 CXX test/cpp_headers/nvmf_spec.o 00:03:21.681 CXX test/cpp_headers/nvmf_transport.o 00:03:21.681 CXX test/cpp_headers/opal.o 00:03:21.681 CXX test/cpp_headers/opal_spec.o 00:03:21.681 CXX test/cpp_headers/pci_ids.o 00:03:21.681 CXX test/cpp_headers/pipe.o 00:03:21.940 CXX test/cpp_headers/queue.o 00:03:21.940 CXX test/cpp_headers/reduce.o 00:03:21.940 CXX test/cpp_headers/rpc.o 00:03:21.940 CXX test/cpp_headers/scheduler.o 00:03:21.940 CXX test/cpp_headers/scsi.o 00:03:21.940 CXX test/cpp_headers/scsi_spec.o 00:03:21.940 CXX test/cpp_headers/sock.o 00:03:21.940 CXX test/cpp_headers/stdinc.o 00:03:21.940 CXX test/cpp_headers/string.o 00:03:21.940 CXX test/cpp_headers/thread.o 00:03:21.940 CXX test/cpp_headers/trace.o 00:03:21.940 CXX test/cpp_headers/trace_parser.o 00:03:21.940 CXX test/cpp_headers/tree.o 00:03:21.940 CXX test/cpp_headers/ublk.o 00:03:21.940 CXX test/cpp_headers/util.o 00:03:21.940 CXX test/cpp_headers/uuid.o 00:03:22.199 CXX test/cpp_headers/version.o 00:03:22.199 CXX test/cpp_headers/vfio_user_pci.o 00:03:22.199 CXX test/cpp_headers/vfio_user_spec.o 00:03:22.199 CXX test/cpp_headers/vhost.o 00:03:22.199 CXX test/cpp_headers/vmd.o 00:03:22.199 CXX test/cpp_headers/xor.o 00:03:22.199 LINK cuse 00:03:22.199 CXX test/cpp_headers/zipf.o 00:03:22.770 LINK esnap 00:03:23.337 00:03:23.337 real 1m6.958s 00:03:23.337 user 6m18.989s 00:03:23.337 sys 1m54.612s 00:03:23.337 18:22:45 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:23.337 18:22:45 make -- common/autotest_common.sh@10 -- $ set +x 00:03:23.337 ************************************ 00:03:23.337 END TEST make 00:03:23.337 ************************************ 00:03:23.337 18:22:45 -- common/autotest_common.sh@1142 -- $ return 0 00:03:23.337 18:22:45 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:23.337 18:22:45 -- pm/common@29 -- $ 
signal_monitor_resources TERM 00:03:23.337 18:22:45 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:23.337 18:22:45 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:23.337 18:22:45 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:23.337 18:22:45 -- pm/common@44 -- $ pid=5142 00:03:23.337 18:22:45 -- pm/common@50 -- $ kill -TERM 5142 00:03:23.337 18:22:45 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:23.337 18:22:45 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:23.337 18:22:45 -- pm/common@44 -- $ pid=5144 00:03:23.337 18:22:45 -- pm/common@50 -- $ kill -TERM 5144 00:03:23.337 18:22:45 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:23.337 18:22:45 -- nvmf/common.sh@7 -- # uname -s 00:03:23.337 18:22:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:23.337 18:22:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:23.337 18:22:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:23.337 18:22:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:23.337 18:22:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:23.337 18:22:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:23.337 18:22:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:23.337 18:22:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:23.337 18:22:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:23.337 18:22:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:23.596 18:22:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:03:23.596 18:22:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=ee8aff67-4252-4979-91cf-1a72f40d57b6 00:03:23.596 18:22:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:23.596 18:22:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:23.596 18:22:45 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:03:23.596 18:22:45 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:23.596 18:22:45 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:23.596 18:22:45 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:23.596 18:22:45 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:23.596 18:22:45 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:23.596 18:22:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:23.596 18:22:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:23.596 18:22:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:23.596 18:22:45 -- paths/export.sh@5 -- # export PATH 00:03:23.596 18:22:45 
-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:23.596 18:22:45 -- nvmf/common.sh@47 -- # : 0 00:03:23.596 18:22:45 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:23.596 18:22:45 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:23.596 18:22:45 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:23.596 18:22:45 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:23.596 18:22:45 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:23.596 18:22:45 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:23.596 18:22:45 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:23.596 18:22:45 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:23.596 18:22:45 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:23.596 18:22:45 -- spdk/autotest.sh@32 -- # uname -s 00:03:23.596 18:22:45 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:23.596 18:22:45 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:23.596 18:22:45 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:23.596 18:22:45 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:23.596 18:22:45 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:23.596 18:22:45 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:23.596 18:22:46 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:23.596 18:22:46 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:23.596 18:22:46 -- spdk/autotest.sh@48 -- # udevadm_pid=54522 00:03:23.596 18:22:46 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:23.597 18:22:46 -- pm/common@17 -- # local monitor 00:03:23.597 18:22:46 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:23.597 18:22:46 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:23.597 18:22:46 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:23.597 18:22:46 -- pm/common@21 -- # date +%s 00:03:23.597 18:22:46 -- pm/common@25 -- # sleep 1 00:03:23.597 18:22:46 -- pm/common@21 -- # date +%s 00:03:23.597 18:22:46 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721067766 00:03:23.597 18:22:46 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721067766 00:03:23.597 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721067766_collect-vmstat.pm.log 00:03:23.597 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721067766_collect-cpu-load.pm.log 00:03:24.534 18:22:47 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:24.534 18:22:47 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:24.534 18:22:47 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:24.534 18:22:47 -- common/autotest_common.sh@10 -- # set +x 00:03:24.534 18:22:47 -- spdk/autotest.sh@59 -- # create_test_list 00:03:24.534 18:22:47 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:24.534 18:22:47 -- common/autotest_common.sh@10 -- # set +x 
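For context on the core_pattern handling traced above: a minimal sketch, assuming root access, of how a test harness can temporarily pipe kernel core dumps to a custom handler and restore the original pattern afterwards. The COLLECTOR path is a placeholder for illustration, not the exact SPDK invocation.

#!/usr/bin/env bash
# Sketch: temporarily redirect kernel core dumps to a custom collector.
# COLLECTOR is a hypothetical path; %P (PID), %s (signal), %t (time) are
# standard core_pattern format specifiers.
set -euo pipefail

COLLECTOR=/path/to/core-collector.sh   # placeholder collector script

# Save the current pattern so it can be restored when the run ends.
old_core_pattern=$(< /proc/sys/kernel/core_pattern)
trap 'echo "$old_core_pattern" > /proc/sys/kernel/core_pattern' EXIT

# A leading '|' tells the kernel to pipe each core file to the given program.
echo "|$COLLECTOR %P %s %t" > /proc/sys/kernel/core_pattern

# ... run the tests here; any crash is now handled by $COLLECTOR ...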
00:03:24.534 18:22:47 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:24.534 18:22:47 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:24.534 18:22:47 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:24.534 18:22:47 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:24.534 18:22:47 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:24.534 18:22:47 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:24.534 18:22:47 -- common/autotest_common.sh@1455 -- # uname 00:03:24.534 18:22:47 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:24.534 18:22:47 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:24.534 18:22:47 -- common/autotest_common.sh@1475 -- # uname 00:03:24.534 18:22:47 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:24.534 18:22:47 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:24.534 18:22:47 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:24.534 18:22:47 -- spdk/autotest.sh@72 -- # hash lcov 00:03:24.534 18:22:47 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:24.793 18:22:47 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:24.793 --rc lcov_branch_coverage=1 00:03:24.793 --rc lcov_function_coverage=1 00:03:24.793 --rc genhtml_branch_coverage=1 00:03:24.793 --rc genhtml_function_coverage=1 00:03:24.793 --rc genhtml_legend=1 00:03:24.793 --rc geninfo_all_blocks=1 00:03:24.793 ' 00:03:24.793 18:22:47 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:24.793 --rc lcov_branch_coverage=1 00:03:24.793 --rc lcov_function_coverage=1 00:03:24.793 --rc genhtml_branch_coverage=1 00:03:24.793 --rc genhtml_function_coverage=1 00:03:24.793 --rc genhtml_legend=1 00:03:24.793 --rc geninfo_all_blocks=1 00:03:24.793 ' 00:03:24.793 18:22:47 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:24.793 --rc lcov_branch_coverage=1 00:03:24.793 --rc lcov_function_coverage=1 00:03:24.793 --rc genhtml_branch_coverage=1 00:03:24.793 --rc genhtml_function_coverage=1 00:03:24.793 --rc genhtml_legend=1 00:03:24.793 --rc geninfo_all_blocks=1 00:03:24.793 --no-external' 00:03:24.793 18:22:47 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:24.793 --rc lcov_branch_coverage=1 00:03:24.793 --rc lcov_function_coverage=1 00:03:24.793 --rc genhtml_branch_coverage=1 00:03:24.793 --rc genhtml_function_coverage=1 00:03:24.793 --rc genhtml_legend=1 00:03:24.794 --rc geninfo_all_blocks=1 00:03:24.794 --no-external' 00:03:24.794 18:22:47 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:24.794 lcov: LCOV version 1.14 00:03:24.794 18:22:47 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:39.678 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:39.678 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:51.886 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:51.886 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:03:51.886 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:51.886 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:03:51.886 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:51.886 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:03:51.886 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:51.886 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:03:51.886 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:51.886 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:03:51.886 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:51.886 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:03:51.886 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:51.886 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:03:51.886 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:51.886 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:03:51.886 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:51.886 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:03:51.886 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:51.886 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:03:51.886 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:51.886 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:03:51.886 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:51.886 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:51.886 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:51.886 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:03:51.886 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:51.886 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:03:51.886 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:51.886 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:03:51.886 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:03:51.886 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:03:51.886 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:51.886 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:03:51.886 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:51.886 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:03:51.886 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:51.886 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:03:51.886 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:51.886 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:03:51.886 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:51.886 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:03:51.886 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:51.886 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:03:51.886 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:51.886 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:03:51.886 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:51.886 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:03:51.886 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:03:51.886 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:03:51.886 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:03:51.886 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:03:51.886 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:51.886 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:03:51.886 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:51.886 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:03:51.886 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:03:51.886 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:03:51.886 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:51.886 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:03:51.886 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:51.886 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:03:51.886 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:51.886 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:03:51.886 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:51.886 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:03:51.886 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:51.886 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:03:51.886 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:51.886 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:03:51.886 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:03:51.886 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:03:51.886 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:51.886 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:03:51.886 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:51.886 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:03:51.886 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:51.886 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:51.886 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:03:51.886 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:03:51.886 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:51.886 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:03:51.886 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:51.886 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:03:51.886 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:51.886 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:03:51.886 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:51.886 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:03:51.886 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:03:51.886 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:03:51.886 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:51.886 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:03:51.886 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:51.886 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:03:51.886 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:51.886 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:03:51.886 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:51.886 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:03:51.886 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:51.886 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:03:51.886 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions 
found 00:03:51.886 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:03:51.886 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:51.886 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:03:51.886 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:51.886 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:51.886 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:51.886 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:51.886 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:51.886 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:03:51.886 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:51.886 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:03:51.886 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:51.887 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:51.887 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:51.887 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:51.887 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:51.887 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:03:51.887 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:51.887 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:51.887 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:51.887 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:51.887 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:51.887 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:03:51.887 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:51.887 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:03:51.887 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:51.887 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:03:51.887 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:51.887 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:03:51.887 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:51.887 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:03:51.887 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:51.887 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:03:51.887 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:51.887 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:03:51.887 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:51.887 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:03:51.887 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:51.887 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:03:51.887 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:51.887 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:03:51.887 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:51.887 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:03:51.887 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:51.887 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:03:51.887 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:03:51.887 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:03:51.887 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:51.887 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:03:51.887 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:51.887 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:03:51.887 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:51.887 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:03:51.887 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:51.887 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:03:51.887 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:03:51.887 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:03:51.887 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:51.887 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:03:51.887 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:51.887 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:03:51.887 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:03:51.887 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:03:51.887 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:51.887 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:51.887 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:51.887 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:51.887 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:51.887 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:03:51.887 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:51.887 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:03:51.887 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:51.887 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:03:51.887 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:51.887 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:03:55.179 18:23:17 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:55.179 18:23:17 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:55.179 18:23:17 -- common/autotest_common.sh@10 -- # set +x 00:03:55.179 18:23:17 -- spdk/autotest.sh@91 -- # rm -f 00:03:55.179 18:23:17 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:55.747 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:56.007 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:03:56.007 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:56.007 18:23:18 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:56.007 18:23:18 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:56.007 18:23:18 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:56.007 18:23:18 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:56.007 18:23:18 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:56.007 18:23:18 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:56.007 18:23:18 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:56.007 18:23:18 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:56.007 18:23:18 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:56.007 18:23:18 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:56.007 18:23:18 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:03:56.007 18:23:18 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:03:56.007 18:23:18 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:56.007 18:23:18 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:56.007 18:23:18 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:56.007 18:23:18 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:03:56.007 18:23:18 -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:03:56.007 18:23:18 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:56.007 18:23:18 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:56.007 18:23:18 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:56.007 18:23:18 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:03:56.007 18:23:18 -- common/autotest_common.sh@1662 -- # local device=nvme1n3 
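The coverage capture above (the "-c -i -t Baseline" invocation) is the usual lcov two-pass flow, and the long run of "no functions found" warnings is what geninfo prints for objects containing no executable functions, which is likely expected for these header-compilation stubs. A minimal sketch of the same flow under assumed placeholder paths (BUILD_DIR, OUT_DIR): capture an initial baseline, capture post-test counters, then merge the two so never-executed files still show up at 0% coverage.

#!/usr/bin/env bash
# Sketch: lcov baseline + test capture, then merge (paths are placeholders).
set -euo pipefail

BUILD_DIR=/path/to/build      # directory containing .gcno/.gcda files
OUT_DIR=coverage

mkdir -p "$OUT_DIR"

# 1. Baseline: --initial records zero counts for everything that was
#    compiled, so untested files are not silently dropped from the report.
lcov --capture --initial --directory "$BUILD_DIR" \
     --output-file "$OUT_DIR/cov_base.info"

# ... run the test suite here to produce .gcda counter files ...

# 2. Capture the counters produced by the tests.
lcov --capture --directory "$BUILD_DIR" \
     --output-file "$OUT_DIR/cov_test.info"

# 3. Merge baseline and test data and generate an HTML report.
lcov --add-tracefile "$OUT_DIR/cov_base.info" \
     --add-tracefile "$OUT_DIR/cov_test.info" \
     --output-file "$OUT_DIR/cov_total.info"
genhtml "$OUT_DIR/cov_total.info" --output-directory "$OUT_DIR/html"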
00:03:56.007 18:23:18 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:56.007 18:23:18 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:56.007 18:23:18 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:56.007 18:23:18 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:56.007 18:23:18 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:56.007 18:23:18 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:56.007 18:23:18 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:56.007 18:23:18 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:56.007 No valid GPT data, bailing 00:03:56.007 18:23:18 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:56.007 18:23:18 -- scripts/common.sh@391 -- # pt= 00:03:56.007 18:23:18 -- scripts/common.sh@392 -- # return 1 00:03:56.007 18:23:18 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:56.007 1+0 records in 00:03:56.007 1+0 records out 00:03:56.007 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00611337 s, 172 MB/s 00:03:56.007 18:23:18 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:56.007 18:23:18 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:56.007 18:23:18 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:03:56.007 18:23:18 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:03:56.007 18:23:18 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:56.267 No valid GPT data, bailing 00:03:56.267 18:23:18 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:56.267 18:23:18 -- scripts/common.sh@391 -- # pt= 00:03:56.267 18:23:18 -- scripts/common.sh@392 -- # return 1 00:03:56.267 18:23:18 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:56.267 1+0 records in 00:03:56.267 1+0 records out 00:03:56.267 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00621405 s, 169 MB/s 00:03:56.267 18:23:18 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:56.267 18:23:18 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:56.267 18:23:18 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:03:56.267 18:23:18 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:03:56.267 18:23:18 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:03:56.267 No valid GPT data, bailing 00:03:56.267 18:23:18 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:03:56.267 18:23:18 -- scripts/common.sh@391 -- # pt= 00:03:56.267 18:23:18 -- scripts/common.sh@392 -- # return 1 00:03:56.267 18:23:18 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:03:56.267 1+0 records in 00:03:56.267 1+0 records out 00:03:56.267 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00691294 s, 152 MB/s 00:03:56.267 18:23:18 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:56.267 18:23:18 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:56.267 18:23:18 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:03:56.267 18:23:18 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:03:56.267 18:23:18 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:03:56.267 No valid GPT data, bailing 00:03:56.267 18:23:18 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:03:56.267 18:23:18 -- scripts/common.sh@391 -- # pt= 00:03:56.267 18:23:18 -- 
scripts/common.sh@392 -- # return 1 00:03:56.267 18:23:18 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:03:56.267 1+0 records in 00:03:56.267 1+0 records out 00:03:56.267 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0060643 s, 173 MB/s 00:03:56.267 18:23:18 -- spdk/autotest.sh@118 -- # sync 00:03:56.526 18:23:18 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:56.526 18:23:18 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:56.526 18:23:18 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:59.060 18:23:21 -- spdk/autotest.sh@124 -- # uname -s 00:03:59.060 18:23:21 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:59.060 18:23:21 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:59.060 18:23:21 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:59.060 18:23:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:59.060 18:23:21 -- common/autotest_common.sh@10 -- # set +x 00:03:59.060 ************************************ 00:03:59.060 START TEST setup.sh 00:03:59.060 ************************************ 00:03:59.060 18:23:21 setup.sh -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:59.060 * Looking for test storage... 00:03:59.060 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:59.060 18:23:21 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:59.060 18:23:21 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:59.060 18:23:21 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:59.060 18:23:21 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:59.060 18:23:21 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:59.060 18:23:21 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:59.060 ************************************ 00:03:59.060 START TEST acl 00:03:59.060 ************************************ 00:03:59.060 18:23:21 setup.sh.acl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:59.320 * Looking for test storage... 
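The per-namespace cleanup traced above (the spdk-gpt.py probe, the blkid PTTYPE check, then a 1 MiB dd) follows a common guard pattern: only zero the start of a block device when no recognizable partition table is present. A minimal sketch of that guard, assuming device paths are passed as arguments; this version probes with blkid only, not SPDK's helper script, and the script name is hypothetical.

#!/usr/bin/env bash
# Sketch: zero the first 1 MiB of each given block device, but only when
# blkid reports no partition-table type (PTTYPE) for it.
set -euo pipefail

for dev in "$@"; do
    # blkid prints e.g. "gpt" or "dos"; empty output means no table found.
    pt=$(blkid -s PTTYPE -o value "$dev" || true)
    if [[ -z "$pt" ]]; then
        echo "No partition table on $dev, clearing first 1 MiB"
        dd if=/dev/zero of="$dev" bs=1M count=1
    else
        echo "Skipping $dev: found $pt partition table"
    fi
done

Invoked as, for example, ./clear_devs.sh /dev/nvme0n1 /dev/nvme1n1 (filename hypothetical); note the dd is destructive, which is why the PTTYPE check gates it.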
00:03:59.320 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:59.320 18:23:21 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:59.320 18:23:21 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:59.320 18:23:21 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:59.320 18:23:21 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:59.320 18:23:21 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:59.320 18:23:21 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:59.320 18:23:21 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:59.320 18:23:21 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:59.320 18:23:21 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:59.320 18:23:21 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:59.320 18:23:21 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:03:59.320 18:23:21 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:03:59.320 18:23:21 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:59.320 18:23:21 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:59.320 18:23:21 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:59.320 18:23:21 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:03:59.320 18:23:21 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:03:59.320 18:23:21 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:59.320 18:23:21 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:59.320 18:23:21 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:59.320 18:23:21 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:03:59.320 18:23:21 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:03:59.320 18:23:21 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:59.320 18:23:21 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:59.320 18:23:21 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:59.320 18:23:21 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:59.320 18:23:21 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:59.320 18:23:21 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:59.320 18:23:21 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:59.320 18:23:21 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:59.320 18:23:21 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:00.256 18:23:22 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:00.256 18:23:22 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:00.256 18:23:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:00.256 18:23:22 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:00.256 18:23:22 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:00.256 18:23:22 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:01.197 18:23:23 setup.sh.acl -- 
setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:04:01.197 18:23:23 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:01.197 18:23:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:01.197 Hugepages 00:04:01.197 node hugesize free / total 00:04:01.197 18:23:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:01.197 18:23:23 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:01.197 18:23:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:01.197 00:04:01.197 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:01.197 18:23:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:01.197 18:23:23 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:01.197 18:23:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:01.197 18:23:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:04:01.197 18:23:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:04:01.198 18:23:23 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:01.198 18:23:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:01.466 18:23:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:04:01.466 18:23:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:01.466 18:23:23 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:01.466 18:23:23 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:01.466 18:23:23 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:01.466 18:23:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:01.466 18:23:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:04:01.466 18:23:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:01.466 18:23:23 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:01.466 18:23:23 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:01.466 18:23:23 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:01.466 18:23:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:01.466 18:23:23 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:04:01.466 18:23:23 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:01.466 18:23:23 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:01.466 18:23:23 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:01.466 18:23:23 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:01.466 ************************************ 00:04:01.466 START TEST denied 00:04:01.466 ************************************ 00:04:01.466 18:23:23 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:04:01.466 18:23:23 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:04:01.466 18:23:23 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:01.466 18:23:23 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:04:01.466 18:23:23 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:01.466 18:23:23 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:02.850 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:04:02.850 18:23:25 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:04:02.850 18:23:25 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev 
driver 00:04:02.850 18:23:25 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:02.850 18:23:25 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:04:02.850 18:23:25 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:04:02.850 18:23:25 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:02.850 18:23:25 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:02.850 18:23:25 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:02.850 18:23:25 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:02.850 18:23:25 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:03.416 00:04:03.416 real 0m1.835s 00:04:03.416 user 0m0.680s 00:04:03.416 sys 0m1.120s 00:04:03.416 18:23:25 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:03.416 ************************************ 00:04:03.416 END TEST denied 00:04:03.416 ************************************ 00:04:03.416 18:23:25 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:03.416 18:23:25 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:03.416 18:23:25 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:03.416 18:23:25 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:03.416 18:23:25 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:03.416 18:23:25 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:03.416 ************************************ 00:04:03.416 START TEST allowed 00:04:03.416 ************************************ 00:04:03.416 18:23:25 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:04:03.416 18:23:25 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:04:03.416 18:23:25 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:03.416 18:23:25 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:03.416 18:23:25 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:04:03.416 18:23:25 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:04.352 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:04.352 18:23:26 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:04:04.352 18:23:26 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:04.352 18:23:26 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:04:04.352 18:23:26 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:04:04.352 18:23:26 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:04:04.352 18:23:26 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:04.352 18:23:26 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:04.352 18:23:26 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:04.352 18:23:26 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:04.352 18:23:26 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:05.286 00:04:05.287 real 0m1.981s 00:04:05.287 user 0m0.779s 00:04:05.287 sys 0m1.231s 00:04:05.287 18:23:27 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:04:05.287 18:23:27 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:05.287 ************************************ 00:04:05.287 END TEST allowed 00:04:05.287 ************************************ 00:04:05.545 18:23:27 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:05.545 00:04:05.545 real 0m6.282s 00:04:05.545 user 0m2.474s 00:04:05.545 sys 0m3.844s 00:04:05.545 18:23:27 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:05.545 18:23:27 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:05.545 ************************************ 00:04:05.545 END TEST acl 00:04:05.545 ************************************ 00:04:05.545 18:23:27 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:05.545 18:23:27 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:05.545 18:23:27 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:05.545 18:23:27 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:05.545 18:23:27 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:05.545 ************************************ 00:04:05.545 START TEST hugepages 00:04:05.545 ************************************ 00:04:05.545 18:23:27 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:05.545 * Looking for test storage... 00:04:05.545 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:05.545 18:23:28 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:05.545 18:23:28 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:05.545 18:23:28 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:05.545 18:23:28 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:05.545 18:23:28 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:05.545 18:23:28 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:05.545 18:23:28 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:05.545 18:23:28 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:05.545 18:23:28 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:05.545 18:23:28 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:05.545 18:23:28 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.545 18:23:28 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.545 18:23:28 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 5880424 kB' 'MemAvailable: 7389228 kB' 'Buffers: 2436 kB' 'Cached: 1720204 kB' 'SwapCached: 0 kB' 'Active: 483860 kB' 'Inactive: 1349984 kB' 'Active(anon): 121692 kB' 'Inactive(anon): 0 kB' 'Active(file): 362168 kB' 'Inactive(file): 1349984 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 113124 kB' 'Mapped: 48840 kB' 'Shmem: 10488 kB' 'KReclaimable: 67184 kB' 'Slab: 143248 kB' 'SReclaimable: 67184 
kB' 'SUnreclaim: 76064 kB' 'KernelStack: 6224 kB' 'PageTables: 4148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412440 kB' 'Committed_AS: 343404 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB' 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.546 18:23:28 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # 
IFS=': ' 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.546 18:23:28 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.546 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.805 18:23:28 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.805 18:23:28 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGEMEM 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGENODE 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v NRHUGE 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/hugepages.sh@197 -- # get_nodes 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/hugepages.sh@26 -- # local node 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=2048 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/hugepages.sh@31 -- # no_nodes=1 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/hugepages.sh@198 -- # clear_hp 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/hugepages.sh@36 -- # local node hp 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}" 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in 
"/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/hugepages.sh@44 -- # export CLEAR_HUGE=yes 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/hugepages.sh@44 -- # CLEAR_HUGE=yes 00:04:05.805 18:23:28 setup.sh.hugepages -- setup/hugepages.sh@200 -- # run_test single_node_setup single_node_setup 00:04:05.805 18:23:28 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:05.805 18:23:28 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:05.805 18:23:28 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:05.805 ************************************ 00:04:05.805 START TEST single_node_setup 00:04:05.805 ************************************ 00:04:05.805 18:23:28 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@1123 -- # single_node_setup 00:04:05.805 18:23:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@135 -- # get_test_nr_hugepages 2097152 0 00:04:05.805 18:23:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@48 -- # local size=2097152 00:04:05.805 18:23:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@49 -- # (( 2 > 1 )) 00:04:05.805 18:23:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@50 -- # shift 00:04:05.805 18:23:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@51 -- # node_ids=('0') 00:04:05.805 18:23:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@51 -- # local node_ids 00:04:05.805 18:23:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:04:05.805 18:23:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@56 -- # nr_hugepages=1024 00:04:05.805 18:23:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 0 00:04:05.805 18:23:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@61 -- # user_nodes=('0') 00:04:05.805 18:23:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@61 -- # local user_nodes 00:04:05.805 18:23:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:04:05.805 18:23:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@64 -- # local _no_nodes=1 00:04:05.805 18:23:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@66 -- # nodes_test=() 00:04:05.805 18:23:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@66 -- # local -g nodes_test 00:04:05.805 18:23:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@68 -- # (( 1 > 0 )) 00:04:05.805 18:23:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@69 -- # for _no_nodes in "${user_nodes[@]}" 00:04:05.805 18:23:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@70 -- # nodes_test[_no_nodes]=1024 00:04:05.805 18:23:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@72 -- # return 0 00:04:05.805 18:23:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # NRHUGE=1024 00:04:05.805 18:23:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # HUGENODE=0 00:04:05.805 18:23:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # setup output 00:04:05.805 18:23:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:05.805 18:23:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@10 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:06.746 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:06.746 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:06.746 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:06.746 18:23:29 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@137 -- # verify_nr_hugepages 00:04:06.746 18:23:29 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@88 -- # local node 00:04:06.746 18:23:29 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@89 -- # local sorted_t 00:04:06.746 18:23:29 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@90 -- # local sorted_s 00:04:06.746 18:23:29 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@91 -- # local surp 00:04:06.746 18:23:29 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@92 -- # local resv 00:04:06.746 18:23:29 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@93 -- # local anon 00:04:06.746 18:23:29 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:06.746 18:23:29 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:04:06.746 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:06.746 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node= 00:04:06.746 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:04:06.746 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:06.746 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.746 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.746 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.746 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.746 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.746 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.746 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.746 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7982364 kB' 'MemAvailable: 9491040 kB' 'Buffers: 2436 kB' 'Cached: 1720196 kB' 'SwapCached: 0 kB' 'Active: 493772 kB' 'Inactive: 1349992 kB' 'Active(anon): 131604 kB' 'Inactive(anon): 0 kB' 'Active(file): 362168 kB' 'Inactive(file): 1349992 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 122752 kB' 'Mapped: 48928 kB' 'Shmem: 10464 kB' 'KReclaimable: 66908 kB' 'Slab: 142952 kB' 'SReclaimable: 66908 kB' 'SUnreclaim: 76044 kB' 'KernelStack: 6272 kB' 'PageTables: 4376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54948 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 
kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB' 00:04:06.746 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.746 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.746 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.746 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.746 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.746 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.746 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.746 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.746 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.746 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.746 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.746 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.746 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.746 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.746 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.746 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.746 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.746 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.746 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.746 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.746 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.746 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.746 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.746 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.746 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.746 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.746 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.746 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.746 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.746 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.746 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.746 18:23:29 
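The long quoted block just above is a single /proc/meminfo snapshot: setup/common.sh captures the file into an array with mapfile, strips any "Node <n> " prefix, and the shell's xtrace then renders the printf of that array as one quoted argument per meminfo line (MemTotal 12241980 kB, 1024 free 2048 kB hugepages, and so on). A trimmed sketch of that capture step, assuming the plain /proc/meminfo path is used, as it is in this call since no node was passed:

    # Sketch of the snapshot step seen in the trace; head is only added here
    # to keep the example output short.
    mapfile -t mem < /proc/meminfo
    printf '%s\n' "${mem[@]}" | head -n 3
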
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.746 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.746 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.746 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.746 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 18:23:29 
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.747 18:23:29 
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup 
-- setup/common.sh@31 -- # IFS=': ' 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@96 -- # 
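The long runs of "[[ <key> == \H\u\g\e\p\a\g\e\s\i\z\e ]] ... continue" and "[[ <key> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] ... continue" entries all record the same lookup pattern: the snapshot is walked with IFS=': ' and read -r, every key that is not the requested one is skipped, and the first match is echoed (2048 for Hugepagesize earlier, 0 for AnonHugePages in this call). A minimal standalone sketch of that pattern follows; the function name is illustrative and not the actual setup/common.sh helper.

    # Illustrative re-implementation of the scan pattern recorded in the trace.
    meminfo_lookup() {
        local want=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$want" ]] || continue   # skip non-matching keys
            echo "$val"                          # value only, e.g. 2048 or 0
            return 0
        done < /proc/meminfo
        return 1
    }

    meminfo_lookup Hugepagesize    # -> 2048 on this host
    meminfo_lookup AnonHugePages   # -> 0 in the run above
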
anon=0 00:04:06.747 18:23:29 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node= 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7981864 kB' 'MemAvailable: 9490540 kB' 'Buffers: 2436 kB' 'Cached: 1720196 kB' 'SwapCached: 0 kB' 'Active: 493516 kB' 'Inactive: 1349992 kB' 'Active(anon): 131348 kB' 'Inactive(anon): 0 kB' 'Active(file): 362168 kB' 'Inactive(file): 1349992 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 122500 kB' 'Mapped: 48800 kB' 'Shmem: 10464 kB' 'KReclaimable: 66908 kB' 'Slab: 142952 kB' 'SReclaimable: 66908 kB' 'SUnreclaim: 76044 kB' 'KernelStack: 6256 kB' 'PageTables: 4308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB' 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup 
-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.748 18:23:29 
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.748 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val 
_ 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@32 -- # continue 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@33 -- # return 0 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@98 -- # surp=0 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node= 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.749 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7981864 kB' 'MemAvailable: 9490540 kB' 'Buffers: 2436 kB' 'Cached: 1720196 kB' 'SwapCached: 0 kB' 'Active: 493452 kB' 'Inactive: 1349992 kB' 'Active(anon): 131284 kB' 'Inactive(anon): 0 kB' 'Active(file): 362168 kB' 'Inactive(file): 1349992 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 122436 kB' 'Mapped: 48800 kB' 'Shmem: 10464 kB' 'KReclaimable: 66908 kB' 'Slab: 142952 kB' 'SReclaimable: 66908 kB' 'SUnreclaim: 76044 kB' 'KernelStack: 6240 kB' 'PageTables: 4260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB' 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.750 18:23:29 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.750 18:23:29 
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.750 
18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.750 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.751 18:23:29 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.751 18:23:29 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.751 18:23:29 
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@99 -- # resv=0 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024 00:04:06.751 
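The long run of "[[ <key> == HugePages_Surp ]]" / "continue" records above is setup/common.sh's get_meminfo walking /proc/meminfo key by key until the requested field matches, then echoing its value (0 for both HugePages_Surp and HugePages_Rsvd here). A minimal sketch of that style of lookup, with a hypothetical helper name (meminfo_value is not the script's own function) and the per-node handling omitted:

meminfo_value() {                        # usage: meminfo_value HugePages_Rsvd
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip every other meminfo key, as in the trace above
        echo "$val"                        # numeric value only; a trailing "kB" lands in _
        return 0
    done < /proc/meminfo
    return 1                               # requested key not present
}

The real get_meminfo reads the whole file into an array first and supports a node argument, but the matching loop it traces is essentially the one sketched here.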
nr_hugepages=1024 00:04:06.751 resv_hugepages=0 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:04:06.751 surplus_hugepages=0 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:04:06.751 anon_hugepages=0 00:04:06.751 18:23:29 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages )) 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node= 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7981360 kB' 'MemAvailable: 9490036 kB' 'Buffers: 2436 kB' 'Cached: 1720196 kB' 'SwapCached: 0 kB' 'Active: 493452 kB' 'Inactive: 1349992 kB' 'Active(anon): 131284 kB' 'Inactive(anon): 0 kB' 'Active(file): 362168 kB' 'Inactive(file): 1349992 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 122436 kB' 'Mapped: 48800 kB' 'Shmem: 10464 kB' 'KReclaimable: 66908 kB' 'Slab: 142952 kB' 'SReclaimable: 66908 kB' 'SUnreclaim: 76044 kB' 'KernelStack: 6240 kB' 'PageTables: 4260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54948 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB' 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 
-- # continue 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 
00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 
00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.034 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@32 -- # continue 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:07.035 18:23:29 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 1024 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@111 -- # get_nodes 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@26 -- # local node 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@28 -- # for node in 
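At this point the trace has read back surp=0, resv=0, nr_hugepages=1024 and HugePages_Total=1024, so the hugepages.sh assertions reduce to one arithmetic identity, as the "(( 1024 == nr_hugepages + surp + resv ))" records suggest. A rough illustration with illustrative variable names rather than the script's internals:

nr_hugepages=1024; surp=0; resv=0; total=1024   # values observed in the trace
if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage accounting consistent: $total total pages"
else
    echo "hugepage accounting mismatch" >&2
fi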
/sys/devices/system/node/node+([0-9]) 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@31 -- # no_nodes=1 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node=0 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7981360 kB' 'MemUsed: 4260620 kB' 'SwapCached: 0 kB' 'Active: 493392 kB' 'Inactive: 1349992 kB' 'Active(anon): 131224 kB' 'Inactive(anon): 0 kB' 'Active(file): 362168 kB' 'Inactive(file): 1349992 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'FilePages: 1722632 kB' 'Mapped: 48800 kB' 'AnonPages: 122376 kB' 'Shmem: 10464 kB' 'KernelStack: 6224 kB' 'PageTables: 4212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66908 kB' 'Slab: 142952 kB' 'SReclaimable: 66908 kB' 'SUnreclaim: 76044 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # 
read -r var val _ 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:07.035 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.036 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.036 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.036 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # 
continue 00:04:07.036 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.036 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.036 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.036 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:07.036 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.036 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.036 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.036 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:07.036 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.036 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.036 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.036 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:07.036 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.036 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.036 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.036 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:07.036 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.036 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.036 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.036 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:07.036 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.036 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.036 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.036 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:07.036 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.036 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.036 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.036 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:07.036 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.036 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.036 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.036 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:07.036 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:04:07.036 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.036 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.036 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:07.036 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:07.036 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:07.036 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.036 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0 00:04:07.036 18:23:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:04:07.036 18:23:29 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:04:07.036 18:23:29 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:04:07.036 18:23:29 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:04:07.036 18:23:29 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:04:07.036 node0=1024 expecting 1024 00:04:07.036 18:23:29 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024' 00:04:07.036 18:23:29 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]] 00:04:07.036 00:04:07.036 real 0m1.203s 00:04:07.036 user 0m0.527s 00:04:07.036 sys 0m0.668s 00:04:07.036 18:23:29 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:07.036 18:23:29 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@10 -- # set +x 00:04:07.036 ************************************ 00:04:07.036 END TEST single_node_setup 00:04:07.036 ************************************ 00:04:07.036 18:23:29 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:07.036 18:23:29 setup.sh.hugepages -- setup/hugepages.sh@201 -- # run_test even_2G_alloc even_2G_alloc 00:04:07.036 18:23:29 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:07.036 18:23:29 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:07.036 18:23:29 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:07.036 ************************************ 00:04:07.036 START TEST even_2G_alloc 00:04:07.036 ************************************ 00:04:07.036 18:23:29 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:04:07.036 18:23:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@142 -- # get_test_nr_hugepages 2097152 00:04:07.036 18:23:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@48 -- # local size=2097152 00:04:07.036 18:23:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:04:07.036 18:23:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:04:07.036 18:23:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024 00:04:07.036 18:23:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:04:07.036 18:23:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 
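A minimal sketch of the arithmetic the even_2G_alloc prologue above walks through: the requested size, which the trace suggests is given in kB, divided by the 2048 kB default hugepage size yields nr_hugepages=1024, and that count is assigned to the runner's single NUMA node. This is an illustration of the traced steps, not the setup/hugepages.sh source; the standalone variable handling below is an assumption.

    # Illustrative sketch of the traced hugepage sizing, not the SPDK script itself.
    default_hugepages=2048                     # kB; matches 'Hugepagesize: 2048 kB' in the log
    size=2097152                               # kB; the argument seen in 'get_test_nr_hugepages 2097152'
    (( size >= default_hugepages )) || exit 1  # the trace checks the request covers at least one page
    nr_hugepages=$(( size / default_hugepages ))    # 2097152 / 2048 = 1024 pages (2 GiB)
    _no_nodes=1                                # the runner reports a single NUMA node
    declare -a nodes_test
    for (( node = _no_nodes - 1; node >= 0; node-- )); do
        nodes_test[node]=$nr_hugepages         # node0 receives the full 1024
    done
    echo "node0=${nodes_test[0]} expecting 1024"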
00:04:07.036 18:23:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:04:07.036 18:23:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:04:07.036 18:23:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=1 00:04:07.036 18:23:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:04:07.036 18:23:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:04:07.036 18:23:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:04:07.036 18:23:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 )) 00:04:07.036 18:23:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:04:07.036 18:23:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=1024 00:04:07.036 18:23:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # : 0 00:04:07.036 18:23:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:07.036 18:23:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:04:07.036 18:23:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@143 -- # NRHUGE=1024 00:04:07.036 18:23:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@143 -- # setup output 00:04:07.036 18:23:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:07.036 18:23:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:07.605 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:07.605 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:07.605 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@144 -- # verify_nr_hugepages 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@88 -- # local node 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local surp 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local resv 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local anon 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7978688 kB' 'MemAvailable: 9487376 kB' 'Buffers: 2436 kB' 'Cached: 1720200 kB' 'SwapCached: 0 kB' 'Active: 493736 kB' 'Inactive: 1350004 kB' 'Active(anon): 131568 kB' 'Inactive(anon): 0 kB' 'Active(file): 362168 kB' 'Inactive(file): 1350004 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 122704 kB' 'Mapped: 48892 kB' 'Shmem: 10464 kB' 'KReclaimable: 66908 kB' 'Slab: 142920 kB' 'SReclaimable: 66908 kB' 'SUnreclaim: 76012 kB' 'KernelStack: 6228 kB' 'PageTables: 4280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54980 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB' 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
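The test "[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]" traced just above reads the kernel's transparent-hugepage mode and appears to gate whether AnonHugePages is consulted at all. A short sketch of that check; the sysfs path is the standard kernel location, while the surrounding variable names and messages are assumptions, not the script's own.

    # Sketch only: check whether transparent hugepages are globally disabled.
    thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        echo "THP mode: $thp; AnonHugePages from /proc/meminfo is meaningful"
    else
        echo "THP set to never; anonymous hugepage usage stays at 0"
    fi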
00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.605 18:23:30 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.605 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.606 18:23:30 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # anon=0 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7978688 kB' 'MemAvailable: 9487376 kB' 'Buffers: 2436 kB' 'Cached: 1720200 kB' 'SwapCached: 0 kB' 'Active: 493372 kB' 'Inactive: 1350004 kB' 'Active(anon): 131204 kB' 'Inactive(anon): 0 kB' 'Active(file): 362168 kB' 'Inactive(file): 1350004 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 122352 kB' 'Mapped: 48804 kB' 'Shmem: 10464 kB' 'KReclaimable: 66908 kB' 'Slab: 142920 kB' 'SReclaimable: 66908 kB' 'SUnreclaim: 76012 kB' 'KernelStack: 6256 kB' 'PageTables: 4296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54980 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB' 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.606 18:23:30 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
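Most of the surrounding output is setup/common.sh's get_meminfo walking every /proc/meminfo key, skipping non-matching fields with continue until it reaches the requested one (AnonHugePages, HugePages_Surp, HugePages_Rsvd) and echoing its value. A simplified sketch of that lookup pattern follows; it streams the file directly instead of using the script's mapfile approach, so treat it as an illustration rather than the SPDK implementation.

    # Simplified sketch of the traced meminfo lookup; not the setup/common.sh implementation.
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # Per-node queries read that node's meminfo when present
        # (those files prefix each key with 'Node <n> ', which the real script strips).
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo
        local var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip MemTotal, MemFree, ... until the key matches
            echo "$val"
            return 0
        done < "$mem_f"
        return 1
    }
    get_meminfo HugePages_Surp   # prints 0 on this runner, matching the 'echo 0' in the trace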
00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.606 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.607 18:23:30 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@33 -- # return 0 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@98 -- # surp=0 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7978688 kB' 'MemAvailable: 9487376 kB' 'Buffers: 2436 kB' 'Cached: 1720200 kB' 'SwapCached: 0 kB' 'Active: 493288 kB' 'Inactive: 1350004 kB' 'Active(anon): 131120 kB' 'Inactive(anon): 0 kB' 'Active(file): 362168 kB' 'Inactive(file): 1350004 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 122488 kB' 'Mapped: 48804 kB' 'Shmem: 10464 kB' 'KReclaimable: 66908 kB' 'Slab: 142920 kB' 'SReclaimable: 66908 kB' 'SUnreclaim: 76012 kB' 'KernelStack: 6224 kB' 'PageTables: 4200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54996 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB' 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.607 18:23:30 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.607 18:23:30 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.607 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.608 18:23:30 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.608 18:23:30 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # resv=0 00:04:07.608 nr_hugepages=1024 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024 00:04:07.608 resv_hugepages=0 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:04:07.608 surplus_hugepages=0 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:04:07.608 anon_hugepages=0 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages )) 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:07.608 
18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7978688 kB' 'MemAvailable: 9487376 kB' 'Buffers: 2436 kB' 'Cached: 1720200 kB' 'SwapCached: 0 kB' 'Active: 493376 kB' 'Inactive: 1350004 kB' 'Active(anon): 131208 kB' 'Inactive(anon): 0 kB' 'Active(file): 362168 kB' 'Inactive(file): 1350004 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 122604 kB' 'Mapped: 48804 kB' 'Shmem: 10464 kB' 'KReclaimable: 66908 kB' 'Slab: 142920 kB' 'SReclaimable: 66908 kB' 'SUnreclaim: 76012 kB' 'KernelStack: 6256 kB' 'PageTables: 4296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54996 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB' 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.608 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.609 18:23:30 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.609 18:23:30 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.609 18:23:30 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@26 -- # local node 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@31 -- # no_nodes=1 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.609 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7978688 kB' 'MemUsed: 4263292 kB' 'SwapCached: 0 kB' 'Active: 493316 kB' 'Inactive: 1350004 kB' 'Active(anon): 131148 kB' 'Inactive(anon): 0 kB' 'Active(file): 362168 kB' 'Inactive(file): 1350004 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'FilePages: 1722636 kB' 'Mapped: 48804 kB' 'AnonPages: 122516 kB' 'Shmem: 10464 kB' 'KernelStack: 6240 kB' 'PageTables: 4248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66908 kB' 'Slab: 142916 kB' 'SReclaimable: 66908 kB' 'SUnreclaim: 76008 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.868 
18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.868 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.869 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.869 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.869 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.869 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.869 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.869 
18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.869 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.869 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.869 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.869 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.869 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.869 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.869 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.869 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.869 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.869 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.869 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.869 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.869 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.869 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.869 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.869 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.869 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.869 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.869 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.869 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.869 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.869 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.869 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.869 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.869 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.869 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.869 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.869 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.869 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.869 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.869 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.869 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.869 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.869 18:23:30 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.869 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.869 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.869 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.869 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.869 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.869 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.869 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.869 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.869 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.869 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.869 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.869 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.869 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.869 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.869 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.869 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.869 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:07.869 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:07.869 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:04:07.869 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:04:07.869 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:04:07.869 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:04:07.869 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024' 00:04:07.869 node0=1024 expecting 1024 00:04:07.869 18:23:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]] 00:04:07.869 00:04:07.869 real 0m0.765s 00:04:07.869 user 0m0.367s 00:04:07.869 sys 0m0.448s 00:04:07.869 18:23:30 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:07.869 18:23:30 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:07.869 ************************************ 00:04:07.869 END TEST even_2G_alloc 00:04:07.869 ************************************ 00:04:07.869 18:23:30 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:07.869 18:23:30 setup.sh.hugepages -- setup/hugepages.sh@202 -- # run_test odd_alloc odd_alloc 00:04:07.869 18:23:30 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:07.869 18:23:30 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:07.869 18:23:30 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 
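The even_2G_alloc pass closes here: verify_nr_hugepages walks nodes_test, prints 'node0=1024 expecting 1024', and the final comparison passes, so the even 2G allocation held exactly 1024 x 2048 kB pages on node 0 before control moves on to the odd_alloc test below. A minimal sketch of that closing check, reconstructed from the xtrace above rather than taken from setup/hugepages.sh itself:

    # Sketch only -- reconstructed from the trace, not the real setup/hugepages.sh.
    declare -A nodes_test=( [0]=1024 )   # per-node page counts gathered during the test
    expected=1024
    for node in "${!nodes_test[@]}"; do
        echo "node${node}=${nodes_test[$node]} expecting ${expected}"
        (( nodes_test[node] == expected )) || exit 1
    done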
00:04:07.869 ************************************ 00:04:07.869 START TEST odd_alloc 00:04:07.869 ************************************ 00:04:07.869 18:23:30 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:04:07.869 18:23:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@149 -- # get_test_nr_hugepages 2098176 00:04:07.869 18:23:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@48 -- # local size=2098176 00:04:07.869 18:23:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:04:07.869 18:23:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:04:07.869 18:23:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1025 00:04:07.869 18:23:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:04:07.869 18:23:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:04:07.869 18:23:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:04:07.869 18:23:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1025 00:04:07.869 18:23:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=1 00:04:07.869 18:23:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:04:07.869 18:23:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:04:07.869 18:23:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:04:07.869 18:23:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 )) 00:04:07.869 18:23:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:04:07.869 18:23:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=1025 00:04:07.869 18:23:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # : 0 00:04:07.869 18:23:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:07.869 18:23:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:04:07.869 18:23:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@150 -- # HUGEMEM=2049 00:04:07.869 18:23:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@150 -- # setup output 00:04:07.869 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:07.869 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:08.440 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:08.440 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:08.440 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@151 -- # verify_nr_hugepages 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@88 -- # local node 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local surp 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local resv 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local anon 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- 
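Before the verification output below, note how odd_alloc sizes its request: HUGEMEM=2049 (MB) arrives in get_test_nr_hugepages as size=2098176 kB and comes out as nr_hugepages=1025, an intentionally odd count of 2048 kB pages. 1025 x 2048 kB = 2099200 kB, which is exactly the Hugetlb figure in the meminfo snapshots that follow. Illustrative arithmetic only; the actual conversion and rounding happen inside setup/hugepages.sh and are not reproduced here:

    # Illustrative arithmetic; the real conversion is done by get_test_nr_hugepages.
    HUGEMEM=2049                       # MB requested by the odd_alloc test
    size_kb=$(( HUGEMEM * 1024 ))      # 2098176 kB, the value seen in the trace
    hugepage_kb=2048                   # Hugepagesize reported in /proc/meminfo
    nr_hugepages=1025                  # odd page count the script settles on
    echo "$(( nr_hugepages * hugepage_kb )) kB hugetlb"   # 2099200 kB, matching Hugetlb: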
setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7973520 kB' 'MemAvailable: 9482208 kB' 'Buffers: 2436 kB' 'Cached: 1720200 kB' 'SwapCached: 0 kB' 'Active: 493604 kB' 'Inactive: 1350004 kB' 'Active(anon): 131436 kB' 'Inactive(anon): 0 kB' 'Active(file): 362168 kB' 'Inactive(file): 1350004 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'AnonPages: 122560 kB' 'Mapped: 48916 kB' 'Shmem: 10464 kB' 'KReclaimable: 66908 kB' 'Slab: 142956 kB' 'SReclaimable: 66908 kB' 'SUnreclaim: 76048 kB' 'KernelStack: 6272 kB' 'PageTables: 4348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 353164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54964 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB' 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.440 18:23:30 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.440 18:23:30 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.440 18:23:30 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.440 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.441 18:23:30 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.441 
18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # anon=0 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7973520 kB' 'MemAvailable: 9482208 kB' 'Buffers: 2436 kB' 'Cached: 1720200 kB' 'SwapCached: 0 kB' 'Active: 493788 kB' 'Inactive: 1350004 kB' 'Active(anon): 131620 kB' 'Inactive(anon): 0 kB' 'Active(file): 362168 kB' 'Inactive(file): 1350004 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'AnonPages: 122508 kB' 'Mapped: 48916 kB' 'Shmem: 10464 kB' 'KReclaimable: 66908 kB' 'Slab: 142956 kB' 'SReclaimable: 66908 kB' 'SUnreclaim: 76048 kB' 'KernelStack: 6240 kB' 'PageTables: 4264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 353164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB' 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- 
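The scan that just finished (and the two that follow for HugePages_Surp and HugePages_Rsvd) is setup/common.sh's get_meminfo doing a linear pass over /proc/meminfo: each line is split with IFS=': ' into a key and a value, every key other than the requested one hits continue, and the first match is echoed back -- 0 for AnonHugePages here, so anon=0. A stripped-down sketch of that pattern; meminfo_value is an illustrative name, not the real get_meminfo:

    # Stripped-down sketch of the pattern traced above (illustrative helper name).
    meminfo_value() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip non-matching keys, as in the trace
            echo "$val"                        # e.g. 0 for AnonHugePages in this run
            return 0
        done < /proc/meminfo
        return 1
    }
    meminfo_value AnonHugePages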
setup/common.sh@31 -- # read -r var val _ 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.441 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.442 18:23:30 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.442 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.443 
18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@98 -- # surp=0 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.443 
18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7973520 kB' 'MemAvailable: 9482208 kB' 'Buffers: 2436 kB' 'Cached: 1720200 kB' 'SwapCached: 0 kB' 'Active: 493544 kB' 'Inactive: 1350004 kB' 'Active(anon): 131376 kB' 'Inactive(anon): 0 kB' 'Active(file): 362168 kB' 'Inactive(file): 1350004 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'AnonPages: 122520 kB' 'Mapped: 48804 kB' 'Shmem: 10464 kB' 'KReclaimable: 66908 kB' 'Slab: 142956 kB' 'SReclaimable: 66908 kB' 'SUnreclaim: 76048 kB' 'KernelStack: 6240 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 353164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54948 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB' 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.443 18:23:30 
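The snapshot just printed already contains the answers the remaining scans will extract one key at a time: HugePages_Total and HugePages_Free are both 1025, HugePages_Rsvd and HugePages_Surp are 0, Hugepagesize is 2048 kB and Hugetlb is 2099200 kB, so the odd 1025-page pool is allocated and still fully free. The same fields can be pulled out by hand with plain grep (not part of the test scripts):

    # Hand check of the fields the trace is scanning for; plain /proc/meminfo inspection.
    grep -E '^(HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize|Hugetlb):' /proc/meminfo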
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.443 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.444 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.445 
18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # resv=0 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1025 00:04:08.445 nr_hugepages=1025 00:04:08.445 resv_hugepages=0 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:04:08.445 surplus_hugepages=0 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:04:08.445 anon_hugepages=0 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@106 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@108 -- # (( 1025 == nr_hugepages )) 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7973520 kB' 'MemAvailable: 9482208 kB' 'Buffers: 2436 kB' 'Cached: 1720200 kB' 'SwapCached: 0 kB' 'Active: 493544 kB' 'Inactive: 1350004 kB' 'Active(anon): 131376 kB' 'Inactive(anon): 0 kB' 'Active(file): 362168 kB' 'Inactive(file): 1350004 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'AnonPages: 122520 kB' 'Mapped: 48804 kB' 'Shmem: 10464 kB' 'KReclaimable: 66908 kB' 'Slab: 142956 kB' 'SReclaimable: 66908 kB' 'SUnreclaim: 76048 kB' 'KernelStack: 6240 kB' 'PageTables: 4256 
kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 353164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54948 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB' 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.445 
18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.445 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.446 18:23:30 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@26 -- # local node 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1025 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@31 -- # no_nodes=1 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:08.446 18:23:30 
setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:08.446 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.447 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.447 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.447 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7973520 kB' 'MemUsed: 4268460 kB' 'SwapCached: 0 kB' 'Active: 493736 kB' 'Inactive: 1350004 kB' 'Active(anon): 131568 kB' 'Inactive(anon): 0 kB' 'Active(file): 362168 kB' 'Inactive(file): 1350004 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'FilePages: 1722636 kB' 'Mapped: 48804 kB' 'AnonPages: 122712 kB' 'Shmem: 10464 kB' 'KernelStack: 6208 kB' 'PageTables: 4160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66908 kB' 'Slab: 142956 kB' 'SReclaimable: 66908 kB' 'SUnreclaim: 76048 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:04:08.447 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.447 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.447 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.447 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.447 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.447 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.447 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.447 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.447 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.447 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.447 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.447 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.447 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.447 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.447 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.447 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.447 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.447 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.447 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.447 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.447 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.447 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.447 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.447 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.447 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.447 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.447 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.447 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.447 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.447 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.447 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.447 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.447 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.447 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.447 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.447 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.447 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.447 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.447 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.447 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.447 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.447 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.447 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.447 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.447 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.447 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.447 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.447 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.447 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.447 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.447 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.447 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.447 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.447 18:23:30 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.447 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.447 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.447 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.447 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.447 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.447 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.447 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.447 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.447 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.447 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.447 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.447 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.447 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.447 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.447 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.447 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.447 18:23:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.447 18:23:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.447 18:23:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.447 18:23:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.447 18:23:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.447 18:23:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.447 18:23:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.447 18:23:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.447 18:23:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.447 18:23:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.447 18:23:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.447 18:23:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.447 18:23:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.447 18:23:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.447 18:23:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.447 18:23:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.447 18:23:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.447 18:23:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.447 18:23:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.447 18:23:31 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.447 18:23:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.447 18:23:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.447 18:23:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.447 18:23:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.447 18:23:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.447 18:23:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.447 18:23:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.447 18:23:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.447 18:23:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.447 18:23:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.447 18:23:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.447 18:23:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.447 18:23:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.447 18:23:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.447 18:23:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.447 18:23:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.447 18:23:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.447 18:23:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.447 18:23:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.447 18:23:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.447 18:23:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.447 18:23:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.447 18:23:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.447 18:23:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.447 18:23:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.447 18:23:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.447 18:23:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.447 18:23:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.447 18:23:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.447 18:23:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.447 18:23:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.447 18:23:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.447 18:23:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.447 18:23:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.447 18:23:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:08.448 18:23:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.448 18:23:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.448 18:23:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.448 18:23:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.448 18:23:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.448 18:23:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.448 18:23:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.448 18:23:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.448 18:23:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.448 18:23:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.448 18:23:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.448 18:23:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.448 18:23:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.448 18:23:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.448 18:23:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.448 18:23:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.448 18:23:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.448 18:23:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:08.448 18:23:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.448 18:23:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.448 18:23:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.448 18:23:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:08.448 18:23:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:08.448 18:23:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:04:08.448 18:23:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:04:08.448 18:23:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:04:08.448 18:23:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:04:08.448 18:23:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # echo 'node0=1025 expecting 1025' 00:04:08.448 node0=1025 expecting 1025 00:04:08.448 18:23:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@129 -- # [[ 1025 == \1\0\2\5 ]] 00:04:08.448 00:04:08.448 real 0m0.710s 00:04:08.448 user 0m0.303s 00:04:08.448 sys 0m0.454s 00:04:08.448 18:23:31 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:08.448 18:23:31 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:08.448 ************************************ 00:04:08.448 END TEST odd_alloc 00:04:08.448 ************************************ 00:04:08.707 18:23:31 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:08.707 18:23:31 
setup.sh.hugepages -- setup/hugepages.sh@203 -- # run_test custom_alloc custom_alloc 00:04:08.707 18:23:31 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:08.707 18:23:31 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:08.707 18:23:31 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:08.707 ************************************ 00:04:08.707 START TEST custom_alloc 00:04:08.707 ************************************ 00:04:08.707 18:23:31 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:04:08.707 18:23:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@157 -- # local IFS=, 00:04:08.707 18:23:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@159 -- # local node 00:04:08.707 18:23:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@160 -- # nodes_hp=() 00:04:08.707 18:23:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@160 -- # local nodes_hp 00:04:08.707 18:23:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@162 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:08.707 18:23:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@164 -- # get_test_nr_hugepages 1048576 00:04:08.707 18:23:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@48 -- # local size=1048576 00:04:08.707 18:23:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:04:08.707 18:23:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:04:08.707 18:23:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=512 00:04:08.707 18:23:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:04:08.707 18:23:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:04:08.707 18:23:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:04:08.707 18:23:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=512 00:04:08.707 18:23:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=1 00:04:08.707 18:23:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:04:08.707 18:23:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:04:08.707 18:23:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:04:08.707 18:23:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 )) 00:04:08.707 18:23:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:04:08.707 18:23:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512 00:04:08.707 18:23:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # : 0 00:04:08.707 18:23:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:08.707 18:23:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:04:08.707 18:23:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@165 -- # nodes_hp[0]=512 00:04:08.707 18:23:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@166 -- # (( 1 > 1 )) 00:04:08.707 18:23:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@171 -- # for node in "${!nodes_hp[@]}" 00:04:08.707 18:23:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:08.707 18:23:31 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@173 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:08.707 18:23:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # get_test_nr_hugepages_per_node 00:04:08.707 18:23:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:04:08.707 18:23:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:04:08.707 18:23:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=512 00:04:08.707 18:23:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=1 00:04:08.707 18:23:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:04:08.707 18:23:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:04:08.707 18:23:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:04:08.707 18:23:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@73 -- # (( 1 > 0 )) 00:04:08.707 18:23:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:08.707 18:23:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=512 00:04:08.707 18:23:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@77 -- # return 0 00:04:08.707 18:23:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # HUGENODE='nodes_hp[0]=512' 00:04:08.707 18:23:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # setup output 00:04:08.707 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:08.707 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:09.305 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:09.305 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:09.305 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:09.305 18:23:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nr_hugepages=512 00:04:09.305 18:23:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # verify_nr_hugepages 00:04:09.305 18:23:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@88 -- # local node 00:04:09.305 18:23:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:04:09.305 18:23:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:04:09.305 18:23:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local surp 00:04:09.305 18:23:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local resv 00:04:09.305 18:23:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local anon 00:04:09.305 18:23:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:09.305 18:23:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:04:09.305 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:09.305 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:09.305 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:09.305 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:09.305 18:23:31 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.305 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.305 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.305 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.305 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.306 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.306 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.306 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9018352 kB' 'MemAvailable: 10527040 kB' 'Buffers: 2436 kB' 'Cached: 1720200 kB' 'SwapCached: 0 kB' 'Active: 493664 kB' 'Inactive: 1350004 kB' 'Active(anon): 131496 kB' 'Inactive(anon): 0 kB' 'Active(file): 362168 kB' 'Inactive(file): 1350004 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 122604 kB' 'Mapped: 48916 kB' 'Shmem: 10464 kB' 'KReclaimable: 66908 kB' 'Slab: 143048 kB' 'SReclaimable: 66908 kB' 'SUnreclaim: 76140 kB' 'KernelStack: 6272 kB' 'PageTables: 4356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 353164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54980 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB' 00:04:09.306 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.306 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.306 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.306 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.306 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.306 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.306 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.306 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.306 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.306 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.306 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.306 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.306 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.306 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.306 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.306 
[... setup/common.sh@31-32 xtrace repeated for each remaining /proc/meminfo key (Cached through Percpu); none matches AnonHugePages, so each comparison falls through to 'continue' ...]
00:04:09.307 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.307 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.307 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.307 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.307 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.307 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.307 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:09.307 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:09.307 18:23:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # anon=0 00:04:09.307 18:23:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:04:09.307 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:09.307 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:09.308 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:09.308 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:09.308 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.308 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.308 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.308 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.308 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.308 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9018352 kB' 'MemAvailable: 10527040 kB' 'Buffers: 2436 kB' 'Cached: 1720200 kB' 'SwapCached: 0 kB' 'Active: 493708 kB' 'Inactive: 1350004 kB' 'Active(anon): 131540 kB' 'Inactive(anon): 0 kB' 'Active(file): 362168 kB' 'Inactive(file): 1350004 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 122648 kB' 'Mapped: 48804 kB' 'Shmem: 10464 kB' 'KReclaimable: 66908 kB' 'Slab: 143048 kB' 'SReclaimable: 66908 kB' 'SUnreclaim: 76140 kB' 'KernelStack: 6256 kB' 'PageTables: 4304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 353164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54964 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB' 00:04:09.308 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.308 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.308 18:23:31 setup.sh.hugepages.custom_alloc -- 
[... setup/common.sh@31-32 xtrace repeated for each /proc/meminfo key (MemTotal through FileHugePages); none matches HugePages_Surp, so each comparison falls through to 'continue' ...]
continue 00:04:09.309 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.309 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.309 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.309 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.309 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.309 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.309 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.309 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.309 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.309 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.309 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.309 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.309 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.309 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.309 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.309 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.309 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.309 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.309 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.309 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.309 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.309 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.309 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.309 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.309 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.309 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.309 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.309 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.309 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.309 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.309 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.310 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:09.310 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:09.310 18:23:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@98 -- # surp=0 00:04:09.310 18:23:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 
-- # get_meminfo HugePages_Rsvd 00:04:09.310 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:09.310 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:09.310 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:09.310 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:09.310 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.310 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.310 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.310 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.310 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.310 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.310 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.310 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9018352 kB' 'MemAvailable: 10527040 kB' 'Buffers: 2436 kB' 'Cached: 1720200 kB' 'SwapCached: 0 kB' 'Active: 493588 kB' 'Inactive: 1350004 kB' 'Active(anon): 131420 kB' 'Inactive(anon): 0 kB' 'Active(file): 362168 kB' 'Inactive(file): 1350004 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 122784 kB' 'Mapped: 48804 kB' 'Shmem: 10464 kB' 'KReclaimable: 66908 kB' 'Slab: 143040 kB' 'SReclaimable: 66908 kB' 'SUnreclaim: 76132 kB' 'KernelStack: 6240 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 353164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54964 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB' 00:04:09.310 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.310 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.310 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.310 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.310 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.310 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.310 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.310 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.310 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.310 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.310 18:23:31 
[... setup/common.sh@31-32 xtrace repeated for each remaining /proc/meminfo key (Buffers through CmaTotal); none matches HugePages_Rsvd, so each comparison falls through to 'continue' ...]
read -r var val _ 00:04:09.311 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.311 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.311 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.311 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.311 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.311 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.311 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.311 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.311 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.311 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.311 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.311 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.311 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # resv=0 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=512 00:04:09.312 nr_hugepages=512 00:04:09.312 resv_hugepages=0 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:04:09.312 surplus_hugepages=0 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:04:09.312 anon_hugepages=0 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@106 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@108 -- # (( 512 == nr_hugepages )) 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9018352 kB' 'MemAvailable: 10527040 kB' 'Buffers: 2436 kB' 'Cached: 1720200 kB' 'SwapCached: 0 kB' 'Active: 493612 kB' 'Inactive: 1350004 kB' 'Active(anon): 131444 kB' 'Inactive(anon): 0 kB' 'Active(file): 362168 kB' 'Inactive(file): 1350004 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 122552 kB' 'Mapped: 48804 kB' 'Shmem: 10464 kB' 'KReclaimable: 66908 kB' 'Slab: 143032 kB' 'SReclaimable: 66908 kB' 'SUnreclaim: 76124 kB' 'KernelStack: 6240 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 353164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54980 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB' 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.312 18:23:31 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.312 18:23:31 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.312 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.313 
18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@26 -- # local node 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@31 -- # no_nodes=1 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.313 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.314 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9018352 kB' 'MemUsed: 3223628 kB' 'SwapCached: 0 kB' 'Active: 493356 kB' 'Inactive: 1350004 kB' 'Active(anon): 131188 kB' 'Inactive(anon): 0 kB' 'Active(file): 362168 kB' 'Inactive(file): 1350004 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'FilePages: 1722636 kB' 'Mapped: 48804 kB' 'AnonPages: 122552 kB' 'Shmem: 10464 kB' 'KernelStack: 6240 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66908 kB' 'Slab: 143016 kB' 'SReclaimable: 
66908 kB' 'SUnreclaim: 76108 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:09.314 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.314 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.314 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.314 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.314 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.314 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.314 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.314 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.314 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.314 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.314 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.314 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.314 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.314 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.314 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.314 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.314 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.314 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.314 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.314 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.314 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.314 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.314 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.314 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.314 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.314 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.314 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.314 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.314 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.314 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.314 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.314 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.314 18:23:31 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.314 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.314 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.314 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.314 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.314 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.314 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.314 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.314 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.314 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.314 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.314 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.314 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.314 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.314 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.314 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.314 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.314 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.314 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.314 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.314 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.314 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.314 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.314 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.314 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.314 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.314 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.314 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.314 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.314 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.314 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.314 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.314 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.314 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.314 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.314 18:23:31 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.314 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.314 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.314 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.314 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.314 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.314 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.314 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.314 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.314 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.314 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.314 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.314 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.314 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.314 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.314 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.315 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.315 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.315 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.315 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.315 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.315 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.315 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.315 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.315 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.315 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.315 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.315 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.315 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.315 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.315 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.315 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.315 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.315 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.315 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:09.315 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.315 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.315 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.315 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.315 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.315 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.315 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.315 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.315 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.315 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.315 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.315 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.315 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.315 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.315 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.315 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.315 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.315 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.315 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.315 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.315 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.315 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.315 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.315 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.315 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.315 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.315 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.315 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.315 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.315 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.315 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.315 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.315 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.315 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.315 18:23:31 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.315 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.315 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.315 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.315 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.315 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:09.315 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.315 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.315 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.315 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:09.315 18:23:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:09.315 18:23:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:04:09.315 18:23:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:04:09.315 18:23:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:04:09.315 18:23:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:04:09.315 18:23:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # echo 'node0=512 expecting 512' 00:04:09.315 node0=512 expecting 512 00:04:09.315 18:23:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@129 -- # [[ 512 == \5\1\2 ]] 00:04:09.315 00:04:09.315 real 0m0.741s 00:04:09.315 user 0m0.349s 00:04:09.315 sys 0m0.442s 00:04:09.315 18:23:31 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:09.315 18:23:31 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:09.315 ************************************ 00:04:09.315 END TEST custom_alloc 00:04:09.315 ************************************ 00:04:09.315 18:23:31 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:09.315 18:23:31 setup.sh.hugepages -- setup/hugepages.sh@204 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:09.315 18:23:31 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:09.315 18:23:31 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:09.315 18:23:31 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:09.315 ************************************ 00:04:09.315 START TEST no_shrink_alloc 00:04:09.315 ************************************ 00:04:09.315 18:23:31 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:04:09.315 18:23:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@185 -- # get_test_nr_hugepages 2097152 0 00:04:09.315 18:23:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@48 -- # local size=2097152 00:04:09.315 18:23:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # (( 2 > 1 )) 00:04:09.315 18:23:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # shift 00:04:09.315 18:23:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # node_ids=('0') 00:04:09.315 18:23:31 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@51 -- # local node_ids 00:04:09.315 18:23:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:04:09.315 18:23:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024 00:04:09.315 18:23:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 0 00:04:09.315 18:23:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@61 -- # user_nodes=('0') 00:04:09.316 18:23:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:04:09.316 18:23:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:04:09.316 18:23:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=1 00:04:09.316 18:23:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:04:09.316 18:23:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:04:09.316 18:23:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@68 -- # (( 1 > 0 )) 00:04:09.316 18:23:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # for _no_nodes in "${user_nodes[@]}" 00:04:09.316 18:23:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # nodes_test[_no_nodes]=1024 00:04:09.316 18:23:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@72 -- # return 0 00:04:09.316 18:23:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # NRHUGE=1024 00:04:09.316 18:23:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # HUGENODE=0 00:04:09.316 18:23:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # setup output 00:04:09.316 18:23:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:09.316 18:23:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:09.886 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:09.886 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:09.886 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@189 -- # verify_nr_hugepages 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@88 -- # local node 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local surp 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local resv 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local anon 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:09.886 18:23:32 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7968560 kB' 'MemAvailable: 9477248 kB' 'Buffers: 2436 kB' 'Cached: 1720200 kB' 'SwapCached: 0 kB' 'Active: 493872 kB' 'Inactive: 1350004 kB' 'Active(anon): 131704 kB' 'Inactive(anon): 0 kB' 'Active(file): 362168 kB' 'Inactive(file): 1350004 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'AnonPages: 122800 kB' 'Mapped: 48964 kB' 'Shmem: 10464 kB' 'KReclaimable: 66908 kB' 'Slab: 143012 kB' 'SReclaimable: 66908 kB' 'SUnreclaim: 76104 kB' 'KernelStack: 6212 kB' 'PageTables: 4216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55012 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB' 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.886 
18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.886 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
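The scan above is setup/common.sh's get_meminfo walking every key in /proc/meminfo until it reaches the one asked for (AnonHugePages here); each key that does not match produces one of the repeated "continue" trace lines. A minimal bash sketch of that loop, reconstructed from the traced commands (the mapfile, the Node-prefix strip, and the IFS=': ' reads); the exact signature and fallback handling are assumptions, not the script's verbatim source:

shopt -s extglob  # needed for the +([0-9]) pattern below

# Sketch of the get_meminfo helper traced above (setup/common.sh@16-33).
get_meminfo() {
	local get=$1 node=${2:-}          # key to look up, optional NUMA node (assumed interface)
	local mem_f=/proc/meminfo mem var val _

	# Per-node counters live under /sys/devices/system/node/node<N>/meminfo;
	# with node unset (the [[ -n '' ]] check in the trace) the global file is used.
	if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi

	mapfile -t mem < "$mem_f"
	# Node files prefix each line with "Node N "; strip it so both sources parse the same way.
	mem=("${mem[@]#Node +([0-9]) }")

	local line
	for line in "${mem[@]}"; do
		IFS=': ' read -r var val _ <<< "$line"
		[[ $var == "$get" ]] || continue   # the long run of "continue" lines in the trace
		echo "$val"                        # e.g. "echo 0" for AnonHugePages, then return 0
		return 0
	done
	return 1
}

Called as, for example, anon=$(get_meminfo AnonHugePages), which is what the hugepages.sh@96 step further down does with the 0 returned here.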
00:04:09.887 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.887 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.887 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.887 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.887 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.887 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.887 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.887 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.887 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.887 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.887 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.887 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.887 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.887 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.887 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.887 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.887 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.887 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.887 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.887 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.887 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.887 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.887 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.887 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.887 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.887 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.887 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.887 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.887 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.887 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.887 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.887 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.887 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.887 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.887 18:23:32 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.887 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.887 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.887 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.887 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.887 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.887 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.887 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.887 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.887 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.887 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.887 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.887 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.887 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.887 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.887 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.887 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.887 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.887 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.887 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.887 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.887 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.887 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.887 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.887 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.887 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.887 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.887 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.887 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.887 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.887 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.887 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.887 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.887 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.887 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.887 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.887 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.150 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.150 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.150 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.150 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.150 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.150 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.150 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.150 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.150 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.150 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:10.150 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:10.150 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # anon=0 00:04:10.150 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:04:10.150 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:10.150 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:10.150 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:10.150 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:10.150 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7968560 kB' 'MemAvailable: 9477248 kB' 'Buffers: 2436 kB' 'Cached: 1720200 kB' 'SwapCached: 0 kB' 'Active: 493676 kB' 'Inactive: 1350004 kB' 'Active(anon): 131508 kB' 'Inactive(anon): 0 kB' 'Active(file): 362168 kB' 'Inactive(file): 1350004 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'AnonPages: 122692 kB' 'Mapped: 48804 kB' 'Shmem: 10464 kB' 'KReclaimable: 66908 kB' 'Slab: 143016 kB' 'SReclaimable: 66908 kB' 'SUnreclaim: 76108 kB' 'KernelStack: 6256 kB' 'PageTables: 4296 kB' 
'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54980 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB' 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.151 18:23:32 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.151 18:23:32 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.151 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.152 18:23:32 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # surp=0 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.152 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7968560 kB' 'MemAvailable: 9477248 kB' 'Buffers: 2436 kB' 'Cached: 1720200 kB' 'SwapCached: 0 kB' 'Active: 493680 kB' 'Inactive: 1350004 kB' 'Active(anon): 131512 kB' 'Inactive(anon): 0 kB' 'Active(file): 362168 kB' 'Inactive(file): 1350004 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'AnonPages: 122692 kB' 'Mapped: 48804 kB' 'Shmem: 10464 kB' 'KReclaimable: 66908 kB' 'Slab: 143016 kB' 'SReclaimable: 66908 kB' 'SUnreclaim: 76108 kB' 'KernelStack: 6256 kB' 'PageTables: 4296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54980 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB' 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.153 18:23:32 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
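One quick arithmetic check on the meminfo snapshots printed in this stretch of the log (values taken from the printf lines above; the check itself is illustrative, not part of the test):

# HugePages_Total * Hugepagesize should equal the Hugetlb figure in the same snapshot.
hp_total=1024        # 'HugePages_Total: 1024'
hp_size_kb=2048      # 'Hugepagesize: 2048 kB'
echo $(( hp_total * hp_size_kb ))   # 2097152, matching 'Hugetlb: 2097152 kB'

With HugePages_Free also at 1024, none of the preallocated pool is in use yet at this point in the run.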
00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.153 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.154 
18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
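Stripped of the per-key trace noise, the hugepages.sh steps around this point (@96 through @109) boil down to: collect AnonHugePages, HugePages_Surp and HugePages_Rsvd through get_meminfo, report them, and check that the kernel's pool accounts for the requested pages. A hedged sketch of that accounting; variable names follow the trace, the literal 1024 mirrors the traced comparisons, and the surrounding control flow is assumed rather than copied from the script:

# Sketch of the accounting traced at setup/hugepages.sh@96-@109 (assumed control flow).
nr_hugepages=1024                       # requested pool size for this test

anon=$(get_meminfo AnonHugePages)       # 0 in this run
surp=$(get_meminfo HugePages_Surp)      # 0
resv=$(get_meminfo HugePages_Rsvd)      # 0

echo "nr_hugepages=$nr_hugepages"
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"
echo "anon_hugepages=$anon"

# The kernel-reported pool (the 1024 in the traced expressions) has to cover the
# requested pages plus any surplus/reserved ones before the test re-reads
# HugePages_Total, which is the scan that follows this point in the log.
if (( 1024 == nr_hugepages + surp + resv )) && (( 1024 == nr_hugepages )); then
	get_meminfo HugePages_Total
fi

Both comparisons hold in this run (surp=0, resv=0), so the log moves straight on to the HugePages_Total key scan below.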
00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:10.154 nr_hugepages=1024 00:04:10.154 resv_hugepages=0 00:04:10.154 surplus_hugepages=0 00:04:10.154 anon_hugepages=0 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # resv=0 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages )) 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7968560 kB' 'MemAvailable: 9477248 kB' 'Buffers: 2436 kB' 'Cached: 1720200 kB' 'SwapCached: 0 kB' 'Active: 493616 kB' 'Inactive: 1350004 kB' 'Active(anon): 131448 kB' 'Inactive(anon): 0 kB' 'Active(file): 362168 kB' 'Inactive(file): 1350004 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'AnonPages: 122580 kB' 'Mapped: 48804 kB' 'Shmem: 10464 kB' 'KReclaimable: 66908 kB' 'Slab: 143016 kB' 'SReclaimable: 66908 kB' 'SUnreclaim: 76108 kB' 'KernelStack: 6240 kB' 'PageTables: 4248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54980 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 
'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB' 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.154 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.155 18:23:32 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.155 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.156 18:23:32 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:10.156 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:04:10.157 
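The long runs of "[[ <field> == \H\u\g\e\P\a\g\e\s\_... ]]" / "continue" entries above are bash xtrace output from the get_meminfo helper in setup/common.sh: it walks one meminfo snapshot field by field, skips every key that does not match the one requested, then echoes the matching value and returns (0 for HugePages_Rsvd, 1024 for HugePages_Total), which setup/hugepages.sh plugs into the consistency check (( 1024 == nr_hugepages + surp + resv )). What follows is a minimal sketch of that pattern, assuming only what the trace itself shows (the helper name, the IFS=': ' split, the "Node +([0-9]) " prefix strip and the per-node sysfs fallback); everything else is simplified and is not SPDK's actual implementation.

#!/usr/bin/env bash
# Hedged sketch of the scan driving the xtrace above; simplified, not the real
# setup/common.sh helper.
shopt -s extglob

get_meminfo_sketch() {
	local get=$1 node=${2:-} var val _
	local mem_f=/proc/meminfo
	# With a node argument and a sysfs copy present, read the per-node meminfo.
	if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	local -a mem
	mapfile -t mem <"$mem_f"
	# Per-node lines carry a "Node 0 " prefix; strip it so both sources parse alike.
	mem=("${mem[@]#Node +([0-9]) }")
	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] || continue  # the repeated "continue" entries in the log
		echo "$val"                       # e.g. 0 for HugePages_Rsvd, 1024 for HugePages_Total
		return 0
	done < <(printf '%s\n' "${mem[@]}")
	return 1
}

# The returned values feed the check the trace performs next, i.e. here
# HugePages_Total (1024) must equal nr_hugepages + surplus + reserved (1024 + 0 + 0).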
18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@26 -- # local node 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@31 -- # no_nodes=1 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7968560 kB' 'MemUsed: 4273420 kB' 'SwapCached: 0 kB' 'Active: 493772 kB' 'Inactive: 1350004 kB' 'Active(anon): 131604 kB' 'Inactive(anon): 0 kB' 'Active(file): 362168 kB' 'Inactive(file): 1350004 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'FilePages: 1722636 kB' 'Mapped: 48804 kB' 'AnonPages: 122760 kB' 'Shmem: 10464 kB' 'KernelStack: 6272 kB' 'PageTables: 4344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66908 kB' 'Slab: 143016 kB' 'SReclaimable: 66908 kB' 'SUnreclaim: 76108 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.157 18:23:32 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.157 
18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.157 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.158 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.158 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.158 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.158 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.158 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.158 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.158 18:23:32 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:10.158 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.158 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.158 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.158 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.158 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.158 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.158 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.158 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.158 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.158 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.158 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.158 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.158 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.158 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.158 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.158 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.158 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.158 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.158 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.158 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.158 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.158 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.158 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.158 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.158 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.158 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.158 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.158 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.158 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.158 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.158 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.158 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.158 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.158 18:23:32 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.158 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.158 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.158 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.158 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.158 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:10.158 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:10.158 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:04:10.158 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:04:10.158 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:04:10.158 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:04:10.158 node0=1024 expecting 1024 00:04:10.158 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024' 00:04:10.158 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]] 00:04:10.158 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # CLEAR_HUGE=no 00:04:10.158 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # NRHUGE=512 00:04:10.158 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # HUGENODE=0 00:04:10.158 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # setup output 00:04:10.158 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:10.158 18:23:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:10.732 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:10.732 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:10.732 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:10.732 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@194 -- # verify_nr_hugepages 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@88 -- # local node 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local surp 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local resv 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local anon 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:10.732 18:23:33 
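The block above is the step the no_shrink_alloc test exists for: with 1024 hugepages already reserved, hugepages.sh@192 re-runs scripts/setup.sh with CLEAR_HUGE=no NRHUGE=512 HUGENODE=0, and setup.sh reports "Requested 512 hugepages but 1024 already allocated on node0" instead of shrinking the pool; verify_nr_hugepages then re-reads the counters to confirm they are unchanged. A hedged sketch of that invocation and the follow-up check, using only the environment variables and the repository path printed in the trace; the per-node sysfs counter path is a standard kernel location and is an assumption, not something this log shows.

# Hedged sketch: re-issuing the setup call the trace shows, with the environment
# taken verbatim from the log; the repository path is the one printed above.
CLEAR_HUGE=no NRHUGE=512 HUGENODE=0 \
	/home/vagrant/spdk_repo/spdk/scripts/setup.sh

# Expected outcome per the log: the existing pool is left alone, so the per-node
# 2 MiB hugepage counter still reads 1024 (standard sysfs path, assumed here).
cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages  # -> 1024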
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7971288 kB' 'MemAvailable: 9479968 kB' 'Buffers: 2436 kB' 'Cached: 1720200 kB' 'SwapCached: 0 kB' 'Active: 489360 kB' 'Inactive: 1350004 kB' 'Active(anon): 127192 kB' 'Inactive(anon): 0 kB' 'Active(file): 362168 kB' 'Inactive(file): 1350004 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'AnonPages: 118328 kB' 'Mapped: 48172 kB' 'Shmem: 10464 kB' 'KReclaimable: 66896 kB' 'Slab: 142788 kB' 'SReclaimable: 66896 kB' 'SUnreclaim: 75892 kB' 'KernelStack: 6116 kB' 'PageTables: 3772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335440 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB' 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.732 18:23:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.732 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # anon=0 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7971288 kB' 'MemAvailable: 9479968 kB' 'Buffers: 2436 kB' 'Cached: 1720200 kB' 'SwapCached: 0 kB' 'Active: 488948 kB' 'Inactive: 1350004 kB' 'Active(anon): 126780 kB' 'Inactive(anon): 0 kB' 'Active(file): 362168 kB' 'Inactive(file): 1350004 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 
'Zswapped: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'AnonPages: 117888 kB' 'Mapped: 48064 kB' 'Shmem: 10464 kB' 'KReclaimable: 66896 kB' 'Slab: 142788 kB' 'SReclaimable: 66896 kB' 'SUnreclaim: 75892 kB' 'KernelStack: 6128 kB' 'PageTables: 3712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335440 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB' 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.733 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.734 18:23:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.734 
18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.734 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.735 18:23:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # surp=0 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7971288 kB' 'MemAvailable: 9479968 kB' 'Buffers: 2436 kB' 'Cached: 1720200 kB' 'SwapCached: 0 kB' 'Active: 488768 kB' 'Inactive: 1350004 kB' 'Active(anon): 126600 kB' 'Inactive(anon): 0 kB' 'Active(file): 362168 kB' 'Inactive(file): 1350004 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'AnonPages: 117988 kB' 'Mapped: 48064 kB' 'Shmem: 10464 kB' 'KReclaimable: 66896 kB' 'Slab: 142788 kB' 'SReclaimable: 66896 kB' 'SUnreclaim: 75892 kB' 'KernelStack: 6144 kB' 'PageTables: 3760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 
kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335440 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB' 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.735 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.736 
18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.736 18:23:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.736 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.737 18:23:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # resv=0 00:04:10.737 nr_hugepages=1024 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024 00:04:10.737 resv_hugepages=0 00:04:10.737 surplus_hugepages=0 00:04:10.737 anon_hugepages=0 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages )) 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7971552 kB' 'MemAvailable: 9480232 kB' 'Buffers: 2436 kB' 'Cached: 1720200 kB' 'SwapCached: 0 kB' 'Active: 489252 kB' 'Inactive: 1350004 kB' 'Active(anon): 127084 kB' 'Inactive(anon): 0 kB' 'Active(file): 362168 kB' 'Inactive(file): 1350004 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'AnonPages: 118284 kB' 'Mapped: 48324 kB' 'Shmem: 10464 kB' 'KReclaimable: 66896 kB' 'Slab: 142780 kB' 'SReclaimable: 66896 kB' 'SUnreclaim: 75884 kB' 'KernelStack: 6176 kB' 'PageTables: 3860 kB' 
'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 337740 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB' 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.737 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.738 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.738 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.738 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.738 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.738 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.738 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.738 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.738 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.738 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.738 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.738 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.738 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.738 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.738 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.738 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.738 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.738 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.738 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.738 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.738 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.738 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:10.738 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.738 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.738 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.738 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.738 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.738 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.738 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.738 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.738 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.738 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.738 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.738 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.738 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.738 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.738 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.738 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.738 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.738 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.738 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.738 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.738 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.738 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.738 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.999 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # continue 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:11.000 18:23:33 
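The long scan above is the xtrace of common.sh's get_meminfo lookup: it reads a meminfo file line by line with IFS=': ' until the requested field (HugePages_Total here) matches, then echoes its value. A minimal standalone sketch of that lookup pattern, with meminfo_value as an illustrative name and the per-node "Node <n>" prefix handling left out:

meminfo_value() {
    local get=$1 var val _
    # Split each "Key:   value kB" line on ':' and spaces; print the value
    # once the requested key is reached, exactly like the scan above.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < /proc/meminfo
    return 1
}

meminfo_value HugePages_Total   # prints 1024 on this runner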
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@26 -- # local node 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@31 -- # no_nodes=1 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7971552 kB' 'MemUsed: 4270428 kB' 'SwapCached: 0 kB' 'Active: 488740 kB' 'Inactive: 1350000 kB' 'Active(anon): 126572 kB' 'Inactive(anon): 0 kB' 'Active(file): 362168 kB' 'Inactive(file): 1350000 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'FilePages: 1722632 kB' 'Mapped: 48064 kB' 'AnonPages: 118032 kB' 'Shmem: 10464 kB' 'KernelStack: 6128 kB' 'PageTables: 3728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66896 kB' 'Slab: 142780 kB' 'SReclaimable: 66896 kB' 'SUnreclaim: 75884 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.000 18:23:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.000 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024' 00:04:11.001 node0=1024 expecting 1024 00:04:11.001 ************************************ 00:04:11.001 END TEST no_shrink_alloc 00:04:11.001 ************************************ 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]] 00:04:11.001 00:04:11.001 real 0m1.504s 00:04:11.001 user 0m0.699s 00:04:11.001 sys 0m0.845s 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:11.001 18:23:33 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:11.001 18:23:33 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:11.001 18:23:33 setup.sh.hugepages -- setup/hugepages.sh@206 -- # clear_hp 00:04:11.001 18:23:33 setup.sh.hugepages -- setup/hugepages.sh@36 -- # local node hp 00:04:11.001 18:23:33 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}" 00:04:11.001 18:23:33 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:11.001 18:23:33 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:04:11.001 18:23:33 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:11.001 18:23:33 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:04:11.001 18:23:33 setup.sh.hugepages -- setup/hugepages.sh@44 -- # export CLEAR_HUGE=yes 00:04:11.001 18:23:33 setup.sh.hugepages -- setup/hugepages.sh@44 -- # CLEAR_HUGE=yes 00:04:11.001 00:04:11.001 real 0m5.481s 00:04:11.001 user 0m2.459s 00:04:11.001 sys 0m3.202s 00:04:11.001 18:23:33 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:11.001 18:23:33 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:11.001 ************************************ 00:04:11.001 END TEST hugepages 00:04:11.001 ************************************ 00:04:11.001 18:23:33 setup.sh -- 
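With node0 confirmed at 1024 pages ("node0=1024 expecting 1024"), clear_hp walks every node's hugepages-* sysfs directories and echoes 0 into each. The exact redirect target is not visible in the xtrace, so the nr_hugepages file below is an assumption based on the standard sysfs layout:

for node in /sys/devices/system/node/node*; do
    for hp in "$node"/hugepages/hugepages-*; do
        # Release every reserved page of this size on this node (needs root).
        echo 0 > "$hp/nr_hugepages"
    done
done
export CLEAR_HUGE=yes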
common/autotest_common.sh@1142 -- # return 0 00:04:11.001 18:23:33 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:11.001 18:23:33 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:11.001 18:23:33 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:11.001 18:23:33 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:11.001 ************************************ 00:04:11.001 START TEST driver 00:04:11.001 ************************************ 00:04:11.001 18:23:33 setup.sh.driver -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:11.259 * Looking for test storage... 00:04:11.259 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:11.259 18:23:33 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:11.259 18:23:33 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:11.259 18:23:33 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:12.196 18:23:34 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:12.196 18:23:34 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:12.196 18:23:34 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:12.196 18:23:34 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:12.196 ************************************ 00:04:12.196 START TEST guess_driver 00:04:12.196 ************************************ 00:04:12.196 18:23:34 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:04:12.196 18:23:34 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:12.196 18:23:34 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:12.196 18:23:34 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:12.196 18:23:34 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:12.196 18:23:34 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:12.196 18:23:34 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:12.196 18:23:34 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:12.196 18:23:34 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:12.196 18:23:34 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:04:12.196 18:23:34 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:04:12.196 18:23:34 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:04:12.196 18:23:34 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:04:12.196 18:23:34 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:04:12.196 18:23:34 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:04:12.196 18:23:34 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:04:12.196 18:23:34 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:04:12.196 18:23:34 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:04:12.196 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 
00:04:12.196 18:23:34 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:04:12.196 Looking for driver=uio_pci_generic 00:04:12.196 18:23:34 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:04:12.196 18:23:34 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:12.196 18:23:34 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:04:12.196 18:23:34 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:12.196 18:23:34 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:12.196 18:23:34 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:12.196 18:23:34 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:12.765 18:23:35 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:04:12.765 18:23:35 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:04:12.765 18:23:35 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.023 18:23:35 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.024 18:23:35 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:13.024 18:23:35 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.024 18:23:35 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.024 18:23:35 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:13.024 18:23:35 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.024 18:23:35 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:13.024 18:23:35 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:13.024 18:23:35 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:13.024 18:23:35 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:13.963 00:04:13.963 real 0m1.832s 00:04:13.963 user 0m0.646s 00:04:13.963 sys 0m1.235s 00:04:13.963 18:23:36 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:13.963 18:23:36 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:13.963 ************************************ 00:04:13.963 END TEST guess_driver 00:04:13.963 ************************************ 00:04:13.963 18:23:36 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:04:13.963 00:04:13.963 real 0m2.813s 00:04:13.963 user 0m0.989s 00:04:13.963 sys 0m1.984s 00:04:13.963 ************************************ 00:04:13.963 END TEST driver 00:04:13.963 ************************************ 00:04:13.963 18:23:36 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:13.963 18:23:36 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:13.963 18:23:36 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:13.963 18:23:36 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:13.963 18:23:36 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:13.963 18:23:36 
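The guess_driver run above tries vfio first (a populated /sys/kernel/iommu_groups, or the unsafe no-IOMMU module parameter set to Y) and falls back to uio_pci_generic only when modprobe --show-depends resolves it to real .ko modules. A condensed sketch of that decision, with pick_setup_driver as an illustrative name rather than the script's own pick_driver/vfio/uio chain:

pick_setup_driver() {
    # vfio-pci is usable when IOMMU groups exist or the unsafe no-IOMMU
    # override is enabled; otherwise accept uio_pci_generic only if
    # modprobe can resolve it to actual kernel modules.
    if compgen -G '/sys/kernel/iommu_groups/*' > /dev/null ||
       [[ $(cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode 2>/dev/null) == Y ]]; then
        echo vfio-pci
    elif modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
        echo uio_pci_generic
    else
        echo 'No valid driver found' >&2
        return 1
    fi
}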
setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:13.963 18:23:36 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:13.963 ************************************ 00:04:13.963 START TEST devices 00:04:13.963 ************************************ 00:04:13.963 18:23:36 setup.sh.devices -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:13.963 * Looking for test storage... 00:04:13.963 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:13.963 18:23:36 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:13.963 18:23:36 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:13.963 18:23:36 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:13.963 18:23:36 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:14.900 18:23:37 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:14.900 18:23:37 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:14.900 18:23:37 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:14.900 18:23:37 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:14.900 18:23:37 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:14.900 18:23:37 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:14.900 18:23:37 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:14.900 18:23:37 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:14.900 18:23:37 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:14.900 18:23:37 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:14.900 18:23:37 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n2 00:04:14.900 18:23:37 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:04:14.900 18:23:37 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:04:14.900 18:23:37 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:14.900 18:23:37 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:14.900 18:23:37 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n3 00:04:14.900 18:23:37 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:04:14.900 18:23:37 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:04:14.900 18:23:37 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:14.900 18:23:37 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:14.900 18:23:37 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:04:14.900 18:23:37 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:04:14.900 18:23:37 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:14.900 18:23:37 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:14.900 18:23:37 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:14.900 18:23:37 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:14.900 18:23:37 setup.sh.devices -- 
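get_zoned_devs above skips any NVMe namespace whose queue/zoned attribute reports something other than none, since zoned namespaces cannot serve as plain test storage here. A simplified sketch of that filter (it assumes at least one /sys/block/nvme* entry exists, as on this runner):

declare -A zoned_devs=()
for nvme in /sys/block/nvme*; do
    dev=${nvme##*/}
    # A queue/zoned value other than "none" marks a zoned (ZNS) namespace.
    if [[ -e $nvme/queue/zoned && $(<"$nvme/queue/zoned") != none ]]; then
        zoned_devs[$dev]=1
    fi
done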
setup/devices.sh@197 -- # blocks_to_pci=() 00:04:14.900 18:23:37 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:14.900 18:23:37 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:14.900 18:23:37 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:14.900 18:23:37 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:14.900 18:23:37 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:14.900 18:23:37 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:14.900 18:23:37 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:14.900 18:23:37 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:14.900 18:23:37 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:14.900 18:23:37 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:04:15.159 No valid GPT data, bailing 00:04:15.159 18:23:37 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:15.159 18:23:37 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:15.159 18:23:37 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:15.159 18:23:37 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:15.159 18:23:37 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:15.159 18:23:37 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:15.159 18:23:37 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:15.159 18:23:37 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:15.159 18:23:37 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:15.159 18:23:37 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:15.159 18:23:37 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:15.159 18:23:37 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:04:15.160 18:23:37 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:15.160 18:23:37 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:15.160 18:23:37 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:15.160 18:23:37 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:04:15.160 18:23:37 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:04:15.160 18:23:37 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:04:15.160 No valid GPT data, bailing 00:04:15.160 18:23:37 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:04:15.160 18:23:37 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:15.160 18:23:37 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:15.160 18:23:37 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:04:15.160 18:23:37 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n2 00:04:15.160 18:23:37 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:04:15.160 18:23:37 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:15.160 18:23:37 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:15.160 18:23:37 setup.sh.devices -- setup/devices.sh@205 -- # 
blocks+=("${block##*/}") 00:04:15.160 18:23:37 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:15.160 18:23:37 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:15.160 18:23:37 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:04:15.160 18:23:37 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:15.160 18:23:37 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:15.160 18:23:37 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:15.160 18:23:37 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:04:15.160 18:23:37 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:04:15.160 18:23:37 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:04:15.160 No valid GPT data, bailing 00:04:15.160 18:23:37 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:04:15.160 18:23:37 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:15.160 18:23:37 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:15.160 18:23:37 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:04:15.160 18:23:37 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n3 00:04:15.160 18:23:37 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:04:15.160 18:23:37 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:15.160 18:23:37 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:15.160 18:23:37 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:15.160 18:23:37 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:15.160 18:23:37 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:15.160 18:23:37 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:04:15.160 18:23:37 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:15.160 18:23:37 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:04:15.160 18:23:37 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:15.160 18:23:37 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:04:15.160 18:23:37 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:04:15.160 18:23:37 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:04:15.160 No valid GPT data, bailing 00:04:15.160 18:23:37 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:15.419 18:23:37 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:15.419 18:23:37 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:15.419 18:23:37 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:04:15.419 18:23:37 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:04:15.419 18:23:37 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:04:15.419 18:23:37 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:04:15.419 18:23:37 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:15.419 18:23:37 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:15.419 18:23:37 setup.sh.devices -- setup/devices.sh@206 -- # 
blocks_to_pci["${block##*/}"]=0000:00:10.0 00:04:15.419 18:23:37 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:04:15.419 18:23:37 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:15.419 18:23:37 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:15.419 18:23:37 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:15.419 18:23:37 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:15.419 18:23:37 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:15.419 ************************************ 00:04:15.419 START TEST nvme_mount 00:04:15.419 ************************************ 00:04:15.419 18:23:37 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:04:15.419 18:23:37 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:15.419 18:23:37 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:15.419 18:23:37 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:15.419 18:23:37 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:15.419 18:23:37 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:15.419 18:23:37 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:15.419 18:23:37 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:15.419 18:23:37 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:15.419 18:23:37 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:15.419 18:23:37 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:15.419 18:23:37 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:15.419 18:23:37 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:15.419 18:23:37 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:15.419 18:23:37 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:15.419 18:23:37 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:15.419 18:23:37 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:15.419 18:23:37 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:15.419 18:23:37 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:15.419 18:23:37 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:16.355 Creating new GPT entries in memory. 00:04:16.355 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:16.355 other utilities. 00:04:16.355 18:23:38 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:16.355 18:23:38 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:16.355 18:23:38 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:16.355 18:23:38 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:16.355 18:23:38 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:17.735 Creating new GPT entries in memory. 00:04:17.735 The operation has completed successfully. 00:04:17.735 18:23:39 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:17.735 18:23:39 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:17.735 18:23:39 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 58627 00:04:17.735 18:23:39 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:17.735 18:23:39 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:17.735 18:23:39 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:17.735 18:23:39 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:17.735 18:23:39 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:17.735 18:23:39 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:17.735 18:23:40 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:17.735 18:23:40 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:17.735 18:23:40 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:17.735 18:23:40 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:17.735 18:23:40 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:17.735 18:23:40 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:17.735 18:23:40 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:17.735 18:23:40 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:17.735 18:23:40 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:17.735 18:23:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.735 18:23:40 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:17.735 18:23:40 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:17.735 18:23:40 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:17.735 18:23:40 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:17.735 18:23:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:17.735 18:23:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:17.735 18:23:40 setup.sh.devices.nvme_mount -- 
setup/devices.sh@63 -- # found=1 00:04:17.735 18:23:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.735 18:23:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:17.735 18:23:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.994 18:23:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:17.994 18:23:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.994 18:23:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:17.994 18:23:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.253 18:23:40 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:18.253 18:23:40 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:18.253 18:23:40 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:18.253 18:23:40 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:18.253 18:23:40 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:18.253 18:23:40 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:18.253 18:23:40 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:18.253 18:23:40 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:18.253 18:23:40 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:18.253 18:23:40 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:18.253 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:18.253 18:23:40 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:18.253 18:23:40 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:18.513 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:18.513 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:18.513 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:18.513 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:18.513 18:23:40 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:18.513 18:23:40 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:18.513 18:23:40 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:18.513 18:23:41 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:18.513 18:23:41 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:18.513 18:23:41 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:18.513 18:23:41 
setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:18.513 18:23:41 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:18.513 18:23:41 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:18.513 18:23:41 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:18.513 18:23:41 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:18.513 18:23:41 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:18.513 18:23:41 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:18.513 18:23:41 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:18.513 18:23:41 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:18.513 18:23:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.513 18:23:41 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:18.513 18:23:41 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:18.513 18:23:41 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:18.513 18:23:41 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:18.773 18:23:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:18.773 18:23:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:18.773 18:23:41 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:18.773 18:23:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.773 18:23:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:18.773 18:23:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.032 18:23:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:19.032 18:23:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.292 18:23:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:19.292 18:23:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.292 18:23:41 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:19.292 18:23:41 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:19.292 18:23:41 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:19.292 18:23:41 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:19.292 18:23:41 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:19.292 18:23:41 
setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:19.292 18:23:41 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:04:19.292 18:23:41 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:19.292 18:23:41 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:19.292 18:23:41 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:19.292 18:23:41 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:19.292 18:23:41 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:19.292 18:23:41 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:19.292 18:23:41 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:19.292 18:23:41 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:19.292 18:23:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.292 18:23:41 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:19.292 18:23:41 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:19.292 18:23:41 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:19.551 18:23:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:19.551 18:23:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:19.551 18:23:42 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:19.551 18:23:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.551 18:23:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:19.551 18:23:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.810 18:23:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:19.810 18:23:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.070 18:23:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:20.070 18:23:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.070 18:23:42 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:20.070 18:23:42 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:20.070 18:23:42 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:20.070 18:23:42 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:20.070 18:23:42 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:20.070 18:23:42 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:20.070 18:23:42 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:20.070 18:23:42 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:20.070 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:20.070 00:04:20.070 real 
0m4.735s 00:04:20.070 user 0m0.919s 00:04:20.070 sys 0m1.513s 00:04:20.070 18:23:42 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:20.070 18:23:42 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:20.070 ************************************ 00:04:20.070 END TEST nvme_mount 00:04:20.070 ************************************ 00:04:20.070 18:23:42 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:20.070 18:23:42 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:20.070 18:23:42 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:20.070 18:23:42 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:20.070 18:23:42 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:20.070 ************************************ 00:04:20.070 START TEST dm_mount 00:04:20.070 ************************************ 00:04:20.070 18:23:42 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:20.070 18:23:42 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:20.070 18:23:42 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:20.070 18:23:42 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:20.070 18:23:42 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:20.070 18:23:42 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:20.070 18:23:42 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:20.070 18:23:42 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:20.070 18:23:42 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:20.070 18:23:42 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:20.070 18:23:42 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:20.070 18:23:42 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:20.070 18:23:42 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:20.070 18:23:42 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:20.070 18:23:42 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:20.071 18:23:42 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:20.071 18:23:42 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:20.071 18:23:42 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:20.071 18:23:42 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:20.071 18:23:42 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:20.071 18:23:42 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:20.071 18:23:42 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:21.449 Creating new GPT entries in memory. 00:04:21.449 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:21.449 other utilities. 
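For readability, the dm_mount partitioning that the trace above steps through (partition_drive nvme0n1 with two partitions) boils down to roughly the following. This is a condensed sketch of the traced commands, not the verbatim helper; the real test/setup/common.sh code also synchronizes on udev events via sync_dev_uevents.sh.

    # Condensed sketch of the traced partition_drive flow (simplified).
    disk=nvme0n1 part_no=2 size=1073741824
    parts=()
    for ((part = 1; part <= part_no; part++)); do
      parts+=("${disk}p$part")            # nvme0n1p1 nvme0n1p2, used by later steps
    done
    ((size /= 4096))                      # byte size -> sector count passed to sgdisk
    sgdisk "/dev/$disk" --zap-all         # wipe any existing GPT/MBR metadata
    part_start=0 part_end=0
    for ((part = 1; part <= part_no; part++)); do
      ((part_start = part_start == 0 ? 2048 : part_end + 1))
      ((part_end = part_start + size - 1))
      flock "/dev/$disk" sgdisk "/dev/$disk" --new=$part:$part_start:$part_end
    done

With size landing at 262144 sectors, this yields exactly the --new=1:2048:264191 and --new=2:264192:526335 calls seen in the log.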
00:04:21.449 18:23:43 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:21.449 18:23:43 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:21.449 18:23:43 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:21.449 18:23:43 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:21.449 18:23:43 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:22.384 Creating new GPT entries in memory. 00:04:22.384 The operation has completed successfully. 00:04:22.384 18:23:44 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:22.384 18:23:44 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:22.384 18:23:44 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:22.384 18:23:44 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:22.384 18:23:44 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:04:23.321 The operation has completed successfully. 00:04:23.321 18:23:45 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:23.321 18:23:45 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:23.321 18:23:45 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 59072 00:04:23.321 18:23:45 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:23.321 18:23:45 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:23.321 18:23:45 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:23.321 18:23:45 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:23.321 18:23:45 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:23.321 18:23:45 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:23.321 18:23:45 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:23.321 18:23:45 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:23.321 18:23:45 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:23.321 18:23:45 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:23.321 18:23:45 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:23.321 18:23:45 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:23.321 18:23:45 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:23.321 18:23:45 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:23.321 18:23:45 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:04:23.321 18:23:45 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:23.321 18:23:45 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e 
/dev/mapper/nvme_dm_test ]] 00:04:23.321 18:23:45 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:23.321 18:23:45 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:23.321 18:23:45 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:23.321 18:23:45 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:23.321 18:23:45 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:23.321 18:23:45 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:23.321 18:23:45 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:23.321 18:23:45 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:23.321 18:23:45 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:23.321 18:23:45 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:23.321 18:23:45 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:23.321 18:23:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.321 18:23:45 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:23.321 18:23:45 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:23.321 18:23:45 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:23.321 18:23:45 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:23.580 18:23:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:23.580 18:23:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:23.580 18:23:46 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:23.580 18:23:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.580 18:23:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:23.580 18:23:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.839 18:23:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:23.839 18:23:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.839 18:23:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:23.839 18:23:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.098 18:23:46 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:24.098 18:23:46 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:04:24.098 18:23:46 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q 
/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:24.098 18:23:46 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:24.098 18:23:46 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:24.098 18:23:46 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:24.098 18:23:46 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:24.098 18:23:46 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:24.098 18:23:46 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:24.098 18:23:46 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:24.098 18:23:46 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:24.098 18:23:46 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:24.098 18:23:46 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:24.098 18:23:46 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:24.098 18:23:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.098 18:23:46 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:24.098 18:23:46 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:24.098 18:23:46 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:24.098 18:23:46 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:24.371 18:23:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:24.371 18:23:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:24.371 18:23:46 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:24.371 18:23:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.371 18:23:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:24.371 18:23:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.629 18:23:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:24.629 18:23:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.629 18:23:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:24.629 18:23:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.629 18:23:47 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:24.629 18:23:47 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:24.629 18:23:47 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:24.629 18:23:47 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:24.629 18:23:47 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q 
/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:24.629 18:23:47 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:24.629 18:23:47 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:24.629 18:23:47 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:24.629 18:23:47 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:24.888 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:24.888 18:23:47 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:24.888 18:23:47 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:24.888 00:04:24.888 real 0m4.652s 00:04:24.888 user 0m0.586s 00:04:24.888 sys 0m0.995s 00:04:24.888 18:23:47 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:24.888 18:23:47 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:24.888 ************************************ 00:04:24.888 END TEST dm_mount 00:04:24.888 ************************************ 00:04:24.888 18:23:47 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:24.888 18:23:47 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:24.888 18:23:47 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:24.888 18:23:47 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:24.888 18:23:47 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:24.888 18:23:47 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:24.888 18:23:47 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:24.888 18:23:47 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:25.146 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:25.146 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:25.146 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:25.146 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:25.146 18:23:47 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:25.146 18:23:47 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:25.146 18:23:47 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:25.146 18:23:47 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:25.146 18:23:47 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:25.146 18:23:47 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:25.146 18:23:47 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:25.146 00:04:25.146 real 0m11.221s 00:04:25.146 user 0m2.218s 00:04:25.146 sys 0m3.350s 00:04:25.146 18:23:47 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:25.146 18:23:47 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:25.146 ************************************ 00:04:25.146 END TEST devices 00:04:25.146 ************************************ 00:04:25.146 18:23:47 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:25.146 00:04:25.146 real 0m26.183s 00:04:25.146 user 0m8.257s 00:04:25.146 sys 0m12.651s 00:04:25.147 18:23:47 
setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:25.147 18:23:47 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:25.147 ************************************ 00:04:25.147 END TEST setup.sh 00:04:25.147 ************************************ 00:04:25.147 18:23:47 -- common/autotest_common.sh@1142 -- # return 0 00:04:25.147 18:23:47 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:26.083 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:26.083 Hugepages 00:04:26.083 node hugesize free / total 00:04:26.083 node0 1048576kB 0 / 0 00:04:26.083 node0 2048kB 2048 / 2048 00:04:26.083 00:04:26.083 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:26.083 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:26.342 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:26.342 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:04:26.342 18:23:48 -- spdk/autotest.sh@130 -- # uname -s 00:04:26.342 18:23:48 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:26.342 18:23:48 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:26.342 18:23:48 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:27.278 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:27.278 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:27.278 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:27.278 18:23:49 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:28.281 18:23:50 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:28.281 18:23:50 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:28.281 18:23:50 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:28.281 18:23:50 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:28.281 18:23:50 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:28.281 18:23:50 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:28.281 18:23:50 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:28.281 18:23:50 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:28.281 18:23:50 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:28.281 18:23:50 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:04:28.281 18:23:50 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:28.281 18:23:50 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:28.849 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:28.849 Waiting for block devices as requested 00:04:28.849 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:29.108 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:29.108 18:23:51 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:29.108 18:23:51 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:29.108 18:23:51 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:04:29.108 18:23:51 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:29.108 18:23:51 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:29.108 18:23:51 -- 
common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:29.108 18:23:51 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:29.108 18:23:51 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:04:29.108 18:23:51 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:04:29.108 18:23:51 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:04:29.108 18:23:51 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:04:29.108 18:23:51 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:29.108 18:23:51 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:29.108 18:23:51 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:04:29.108 18:23:51 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:29.108 18:23:51 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:29.108 18:23:51 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:29.108 18:23:51 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 00:04:29.108 18:23:51 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:29.108 18:23:51 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:29.108 18:23:51 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:29.108 18:23:51 -- common/autotest_common.sh@1557 -- # continue 00:04:29.108 18:23:51 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:29.108 18:23:51 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:29.108 18:23:51 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:04:29.108 18:23:51 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:29.108 18:23:51 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:29.108 18:23:51 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:29.108 18:23:51 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:29.108 18:23:51 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:29.108 18:23:51 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:29.108 18:23:51 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:29.108 18:23:51 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:29.108 18:23:51 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:29.108 18:23:51 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:29.108 18:23:51 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:04:29.108 18:23:51 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:29.108 18:23:51 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:29.108 18:23:51 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:29.108 18:23:51 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:29.108 18:23:51 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:29.108 18:23:51 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:29.108 18:23:51 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:29.108 18:23:51 -- common/autotest_common.sh@1557 -- # continue 00:04:29.108 18:23:51 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:29.108 18:23:51 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:29.108 18:23:51 -- common/autotest_common.sh@10 -- # set +x 00:04:29.367 18:23:51 -- spdk/autotest.sh@138 -- # timing_enter 
afterboot 00:04:29.367 18:23:51 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:29.367 18:23:51 -- common/autotest_common.sh@10 -- # set +x 00:04:29.367 18:23:51 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:29.933 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:30.191 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:30.191 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:30.191 18:23:52 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:30.191 18:23:52 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:30.191 18:23:52 -- common/autotest_common.sh@10 -- # set +x 00:04:30.191 18:23:52 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:30.191 18:23:52 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:30.191 18:23:52 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:30.191 18:23:52 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:30.191 18:23:52 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:30.191 18:23:52 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:30.191 18:23:52 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:30.191 18:23:52 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:30.191 18:23:52 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:30.191 18:23:52 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:30.191 18:23:52 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:30.645 18:23:52 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:04:30.645 18:23:52 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:30.645 18:23:52 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:30.645 18:23:52 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:30.645 18:23:52 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:30.645 18:23:52 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:30.645 18:23:52 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:30.645 18:23:52 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:30.645 18:23:52 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:30.645 18:23:52 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:30.645 18:23:52 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:04:30.645 18:23:52 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:04:30.645 18:23:52 -- common/autotest_common.sh@1593 -- # return 0 00:04:30.645 18:23:52 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:30.645 18:23:52 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:30.645 18:23:52 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:30.645 18:23:52 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:30.645 18:23:52 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:30.645 18:23:52 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:30.645 18:23:52 -- common/autotest_common.sh@10 -- # set +x 00:04:30.645 18:23:52 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:30.645 18:23:52 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:30.645 18:23:52 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:30.645 18:23:52 -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:04:30.645 18:23:52 -- common/autotest_common.sh@10 -- # set +x 00:04:30.645 ************************************ 00:04:30.645 START TEST env 00:04:30.645 ************************************ 00:04:30.645 18:23:52 env -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:30.645 * Looking for test storage... 00:04:30.645 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:30.645 18:23:53 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:30.645 18:23:53 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:30.645 18:23:53 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:30.645 18:23:53 env -- common/autotest_common.sh@10 -- # set +x 00:04:30.645 ************************************ 00:04:30.645 START TEST env_memory 00:04:30.645 ************************************ 00:04:30.645 18:23:53 env.env_memory -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:30.645 00:04:30.645 00:04:30.645 CUnit - A unit testing framework for C - Version 2.1-3 00:04:30.645 http://cunit.sourceforge.net/ 00:04:30.645 00:04:30.645 00:04:30.645 Suite: memory 00:04:30.645 Test: alloc and free memory map ...[2024-07-15 18:23:53.059452] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:30.645 passed 00:04:30.645 Test: mem map translation ...[2024-07-15 18:23:53.079852] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:30.645 [2024-07-15 18:23:53.079973] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:30.645 [2024-07-15 18:23:53.080077] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:30.645 [2024-07-15 18:23:53.080122] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:30.645 passed 00:04:30.645 Test: mem map registration ...[2024-07-15 18:23:53.118271] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:30.645 [2024-07-15 18:23:53.118408] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:30.645 passed 00:04:30.645 Test: mem map adjacent registrations ...passed 00:04:30.645 00:04:30.645 Run Summary: Type Total Ran Passed Failed Inactive 00:04:30.645 suites 1 1 n/a 0 0 00:04:30.645 tests 4 4 4 0 0 00:04:30.645 asserts 152 152 152 0 n/a 00:04:30.645 00:04:30.645 Elapsed time = 0.137 seconds 00:04:30.645 00:04:30.645 real 0m0.160s 00:04:30.645 user 0m0.141s 00:04:30.645 sys 0m0.015s 00:04:30.645 18:23:53 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:30.645 18:23:53 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:30.645 ************************************ 00:04:30.645 END TEST env_memory 00:04:30.645 ************************************ 00:04:30.645 18:23:53 env -- common/autotest_common.sh@1142 -- # return 0 00:04:30.645 18:23:53 env -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:30.645 18:23:53 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:30.645 18:23:53 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:30.645 18:23:53 env -- common/autotest_common.sh@10 -- # set +x 00:04:30.645 ************************************ 00:04:30.645 START TEST env_vtophys 00:04:30.645 ************************************ 00:04:30.645 18:23:53 env.env_vtophys -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:30.645 EAL: lib.eal log level changed from notice to debug 00:04:30.645 EAL: Detected lcore 0 as core 0 on socket 0 00:04:30.645 EAL: Detected lcore 1 as core 0 on socket 0 00:04:30.645 EAL: Detected lcore 2 as core 0 on socket 0 00:04:30.645 EAL: Detected lcore 3 as core 0 on socket 0 00:04:30.645 EAL: Detected lcore 4 as core 0 on socket 0 00:04:30.645 EAL: Detected lcore 5 as core 0 on socket 0 00:04:30.645 EAL: Detected lcore 6 as core 0 on socket 0 00:04:30.645 EAL: Detected lcore 7 as core 0 on socket 0 00:04:30.645 EAL: Detected lcore 8 as core 0 on socket 0 00:04:30.645 EAL: Detected lcore 9 as core 0 on socket 0 00:04:30.902 EAL: Maximum logical cores by configuration: 128 00:04:30.902 EAL: Detected CPU lcores: 10 00:04:30.902 EAL: Detected NUMA nodes: 1 00:04:30.902 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:30.902 EAL: Detected shared linkage of DPDK 00:04:30.902 EAL: No shared files mode enabled, IPC will be disabled 00:04:30.902 EAL: Selected IOVA mode 'PA' 00:04:30.902 EAL: Probing VFIO support... 00:04:30.902 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:30.902 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:30.902 EAL: Ask a virtual area of 0x2e000 bytes 00:04:30.902 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:30.902 EAL: Setting up physically contiguous memory... 
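The VFIO probe reflected in the EAL messages above can be reproduced with a quick shell check; this is only an illustration of what the log reports, not EAL's actual code path, which does the probing internally in C.

    # Illustrative check mirroring EAL's VFIO probe above: with no vfio /
    # vfio_pci modules loaded, DPDK falls back to IOVA mode 'PA', as this run shows.
    if [[ -d /sys/module/vfio && -d /sys/module/vfio_pci ]]; then
      echo "VFIO modules loaded; IOVA mode 'VA' is possible"
    else
      echo "VFIO modules not loaded; expect IOVA mode 'PA'"
    fi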
00:04:30.903 EAL: Setting maximum number of open files to 524288 00:04:30.903 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:30.903 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:30.903 EAL: Ask a virtual area of 0x61000 bytes 00:04:30.903 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:30.903 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:30.903 EAL: Ask a virtual area of 0x400000000 bytes 00:04:30.903 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:30.903 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:30.903 EAL: Ask a virtual area of 0x61000 bytes 00:04:30.903 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:30.903 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:30.903 EAL: Ask a virtual area of 0x400000000 bytes 00:04:30.903 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:30.903 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:30.903 EAL: Ask a virtual area of 0x61000 bytes 00:04:30.903 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:30.903 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:30.903 EAL: Ask a virtual area of 0x400000000 bytes 00:04:30.903 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:30.903 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:30.903 EAL: Ask a virtual area of 0x61000 bytes 00:04:30.903 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:30.903 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:30.903 EAL: Ask a virtual area of 0x400000000 bytes 00:04:30.903 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:30.903 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:30.903 EAL: Hugepages will be freed exactly as allocated. 00:04:30.903 EAL: No shared files mode enabled, IPC is disabled 00:04:30.903 EAL: No shared files mode enabled, IPC is disabled 00:04:30.903 EAL: TSC frequency is ~2490000 KHz 00:04:30.903 EAL: Main lcore 0 is ready (tid=7fda27515a00;cpuset=[0]) 00:04:30.903 EAL: Trying to obtain current memory policy. 00:04:30.903 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.903 EAL: Restoring previous memory policy: 0 00:04:30.903 EAL: request: mp_malloc_sync 00:04:30.903 EAL: No shared files mode enabled, IPC is disabled 00:04:30.903 EAL: Heap on socket 0 was expanded by 2MB 00:04:30.903 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:30.903 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:30.903 EAL: Mem event callback 'spdk:(nil)' registered 00:04:30.903 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:30.903 00:04:30.903 00:04:30.903 CUnit - A unit testing framework for C - Version 2.1-3 00:04:30.903 http://cunit.sourceforge.net/ 00:04:30.903 00:04:30.903 00:04:30.903 Suite: components_suite 00:04:30.903 Test: vtophys_malloc_test ...passed 00:04:30.903 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
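The virtual-area reservations in the EAL lines above are easy to sanity-check: each of the four memseg lists covers 8192 segments of 2 MiB hugepages (the "page size 0x800kB" / hugepage_sz:2097152 above), so each list reserves 8192 x 2 MiB = 16 GiB of address space, matching the 0x400000000-byte requests.

    # Sanity check of the memseg-list reservations logged above.
    printf 'per list:  0x%x bytes\n' $((8192 * 2 * 1024 * 1024))      # 0x400000000 (16 GiB)
    printf 'all lists: 0x%x bytes\n' $((4 * 8192 * 2 * 1024 * 1024))  # 0x1000000000 (64 GiB of VA)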
00:04:30.903 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.903 EAL: Restoring previous memory policy: 4 00:04:30.903 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.903 EAL: request: mp_malloc_sync 00:04:30.903 EAL: No shared files mode enabled, IPC is disabled 00:04:30.903 EAL: Heap on socket 0 was expanded by 4MB 00:04:30.903 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.903 EAL: request: mp_malloc_sync 00:04:30.903 EAL: No shared files mode enabled, IPC is disabled 00:04:30.903 EAL: Heap on socket 0 was shrunk by 4MB 00:04:30.903 EAL: Trying to obtain current memory policy. 00:04:30.903 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.903 EAL: Restoring previous memory policy: 4 00:04:30.903 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.903 EAL: request: mp_malloc_sync 00:04:30.903 EAL: No shared files mode enabled, IPC is disabled 00:04:30.903 EAL: Heap on socket 0 was expanded by 6MB 00:04:30.903 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.903 EAL: request: mp_malloc_sync 00:04:30.903 EAL: No shared files mode enabled, IPC is disabled 00:04:30.903 EAL: Heap on socket 0 was shrunk by 6MB 00:04:30.903 EAL: Trying to obtain current memory policy. 00:04:30.903 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.903 EAL: Restoring previous memory policy: 4 00:04:30.903 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.903 EAL: request: mp_malloc_sync 00:04:30.903 EAL: No shared files mode enabled, IPC is disabled 00:04:30.903 EAL: Heap on socket 0 was expanded by 10MB 00:04:30.903 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.903 EAL: request: mp_malloc_sync 00:04:30.903 EAL: No shared files mode enabled, IPC is disabled 00:04:30.903 EAL: Heap on socket 0 was shrunk by 10MB 00:04:30.903 EAL: Trying to obtain current memory policy. 00:04:30.903 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.903 EAL: Restoring previous memory policy: 4 00:04:30.903 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.903 EAL: request: mp_malloc_sync 00:04:30.903 EAL: No shared files mode enabled, IPC is disabled 00:04:30.903 EAL: Heap on socket 0 was expanded by 18MB 00:04:30.903 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.903 EAL: request: mp_malloc_sync 00:04:30.903 EAL: No shared files mode enabled, IPC is disabled 00:04:30.903 EAL: Heap on socket 0 was shrunk by 18MB 00:04:30.903 EAL: Trying to obtain current memory policy. 00:04:30.903 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.903 EAL: Restoring previous memory policy: 4 00:04:30.903 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.903 EAL: request: mp_malloc_sync 00:04:30.903 EAL: No shared files mode enabled, IPC is disabled 00:04:30.903 EAL: Heap on socket 0 was expanded by 34MB 00:04:30.903 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.903 EAL: request: mp_malloc_sync 00:04:30.903 EAL: No shared files mode enabled, IPC is disabled 00:04:30.903 EAL: Heap on socket 0 was shrunk by 34MB 00:04:30.903 EAL: Trying to obtain current memory policy. 
00:04:30.903 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.903 EAL: Restoring previous memory policy: 4 00:04:30.903 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.903 EAL: request: mp_malloc_sync 00:04:30.903 EAL: No shared files mode enabled, IPC is disabled 00:04:30.903 EAL: Heap on socket 0 was expanded by 66MB 00:04:30.903 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.903 EAL: request: mp_malloc_sync 00:04:30.903 EAL: No shared files mode enabled, IPC is disabled 00:04:30.903 EAL: Heap on socket 0 was shrunk by 66MB 00:04:30.903 EAL: Trying to obtain current memory policy. 00:04:30.903 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.903 EAL: Restoring previous memory policy: 4 00:04:30.903 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.903 EAL: request: mp_malloc_sync 00:04:30.903 EAL: No shared files mode enabled, IPC is disabled 00:04:30.903 EAL: Heap on socket 0 was expanded by 130MB 00:04:30.903 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.903 EAL: request: mp_malloc_sync 00:04:30.903 EAL: No shared files mode enabled, IPC is disabled 00:04:30.903 EAL: Heap on socket 0 was shrunk by 130MB 00:04:30.903 EAL: Trying to obtain current memory policy. 00:04:30.903 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.181 EAL: Restoring previous memory policy: 4 00:04:31.181 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.181 EAL: request: mp_malloc_sync 00:04:31.181 EAL: No shared files mode enabled, IPC is disabled 00:04:31.181 EAL: Heap on socket 0 was expanded by 258MB 00:04:31.181 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.181 EAL: request: mp_malloc_sync 00:04:31.181 EAL: No shared files mode enabled, IPC is disabled 00:04:31.181 EAL: Heap on socket 0 was shrunk by 258MB 00:04:31.181 EAL: Trying to obtain current memory policy. 00:04:31.181 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.181 EAL: Restoring previous memory policy: 4 00:04:31.181 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.181 EAL: request: mp_malloc_sync 00:04:31.181 EAL: No shared files mode enabled, IPC is disabled 00:04:31.181 EAL: Heap on socket 0 was expanded by 514MB 00:04:31.454 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.454 EAL: request: mp_malloc_sync 00:04:31.454 EAL: No shared files mode enabled, IPC is disabled 00:04:31.454 EAL: Heap on socket 0 was shrunk by 514MB 00:04:31.454 EAL: Trying to obtain current memory policy. 
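The heap expansion sizes in the malloc rounds above (4, 6, 10, 18, 34, 66, 130, 258, 514 MB, plus the 1026 MB round that follows) fit a simple pattern: each round appears to allocate a power-of-two-sized buffer, and the heap grows by that buffer plus one extra 2 MB hugepage, i.e. 2^k + 2 MB. This is an interpretation of the reported figures, not something taken from the test source.

    # Reconstructing the expansion sizes reported above as 2^k + 2 MB (k = 1..10);
    # the '+ 2' is read here as one extra 2 MB hugepage of allocator overhead.
    for k in $(seq 1 10); do printf '%d ' $(( (1 << k) + 2 )); done; echo
    # -> 4 6 10 18 34 66 130 258 514 1026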
00:04:31.454 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.711 EAL: Restoring previous memory policy: 4 00:04:31.711 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.711 EAL: request: mp_malloc_sync 00:04:31.711 EAL: No shared files mode enabled, IPC is disabled 00:04:31.711 EAL: Heap on socket 0 was expanded by 1026MB 00:04:31.711 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.969 passed 00:04:31.969 00:04:31.969 Run Summary: Type Total Ran Passed Failed Inactive 00:04:31.969 suites 1 1 n/a 0 0 00:04:31.970 tests 2 2 2 0 0 00:04:31.970 asserts 5358 5358 5358 0 n/a 00:04:31.970 00:04:31.970 Elapsed time = 0.979 seconds 00:04:31.970 EAL: request: mp_malloc_sync 00:04:31.970 EAL: No shared files mode enabled, IPC is disabled 00:04:31.970 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:31.970 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.970 EAL: request: mp_malloc_sync 00:04:31.970 EAL: No shared files mode enabled, IPC is disabled 00:04:31.970 EAL: Heap on socket 0 was shrunk by 2MB 00:04:31.970 EAL: No shared files mode enabled, IPC is disabled 00:04:31.970 EAL: No shared files mode enabled, IPC is disabled 00:04:31.970 EAL: No shared files mode enabled, IPC is disabled 00:04:31.970 ************************************ 00:04:31.970 END TEST env_vtophys 00:04:31.970 ************************************ 00:04:31.970 00:04:31.970 real 0m1.166s 00:04:31.970 user 0m0.630s 00:04:31.970 sys 0m0.405s 00:04:31.970 18:23:54 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:31.970 18:23:54 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:31.970 18:23:54 env -- common/autotest_common.sh@1142 -- # return 0 00:04:31.970 18:23:54 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:31.970 18:23:54 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:31.970 18:23:54 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:31.970 18:23:54 env -- common/autotest_common.sh@10 -- # set +x 00:04:31.970 ************************************ 00:04:31.970 START TEST env_pci 00:04:31.970 ************************************ 00:04:31.970 18:23:54 env.env_pci -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:31.970 00:04:31.970 00:04:31.970 CUnit - A unit testing framework for C - Version 2.1-3 00:04:31.970 http://cunit.sourceforge.net/ 00:04:31.970 00:04:31.970 00:04:31.970 Suite: pci 00:04:31.970 Test: pci_hook ...[2024-07-15 18:23:54.484541] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 60265 has claimed it 00:04:31.970 passed 00:04:31.970 00:04:31.970 EAL: Cannot find device (10000:00:01.0) 00:04:31.970 EAL: Failed to attach device on primary process 00:04:31.970 Run Summary: Type Total Ran Passed Failed Inactive 00:04:31.970 suites 1 1 n/a 0 0 00:04:31.970 tests 1 1 1 0 0 00:04:31.970 asserts 25 25 25 0 n/a 00:04:31.970 00:04:31.970 Elapsed time = 0.003 seconds 00:04:31.970 ************************************ 00:04:31.970 END TEST env_pci 00:04:31.970 ************************************ 00:04:31.970 00:04:31.970 real 0m0.029s 00:04:31.970 user 0m0.016s 00:04:31.970 sys 0m0.012s 00:04:31.970 18:23:54 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:31.970 18:23:54 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:31.970 18:23:54 env -- common/autotest_common.sh@1142 -- # 
return 0 00:04:31.970 18:23:54 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:31.970 18:23:54 env -- env/env.sh@15 -- # uname 00:04:31.970 18:23:54 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:31.970 18:23:54 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:31.970 18:23:54 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:31.970 18:23:54 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:04:31.970 18:23:54 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:31.970 18:23:54 env -- common/autotest_common.sh@10 -- # set +x 00:04:31.970 ************************************ 00:04:31.970 START TEST env_dpdk_post_init 00:04:31.970 ************************************ 00:04:31.970 18:23:54 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:32.228 EAL: Detected CPU lcores: 10 00:04:32.228 EAL: Detected NUMA nodes: 1 00:04:32.228 EAL: Detected shared linkage of DPDK 00:04:32.228 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:32.228 EAL: Selected IOVA mode 'PA' 00:04:32.228 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:32.228 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:32.228 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:32.228 Starting DPDK initialization... 00:04:32.228 Starting SPDK post initialization... 00:04:32.228 SPDK NVMe probe 00:04:32.228 Attaching to 0000:00:10.0 00:04:32.228 Attaching to 0000:00:11.0 00:04:32.228 Attached to 0000:00:10.0 00:04:32.228 Attached to 0000:00:11.0 00:04:32.228 Cleaning up... 
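env_dpdk_post_init is started with the arguments visible above (-c 0x1 --base-virtaddr=0x200000000000) and then lets the NVMe driver probe the two emulated controllers at 0000:00:10.0 and 0000:00:11.0. Roughly equivalent standalone code, sketched against the public env/nvme headers rather than the test's source (controller bookkeeping and detach are omitted for brevity):

#include <stdio.h>
#include <stdbool.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

static bool
probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
         struct spdk_nvme_ctrlr_opts *ctrlr_opts)
{
    printf("Attaching to %s\n", trid->traddr);
    return true;    /* attach to every controller the probe finds */
}

static void
attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
          struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *ctrlr_opts)
{
    /* A complete program would keep the ctrlr handle and call
     * spdk_nvme_detach() on it during cleanup. */
    printf("Attached to %s\n", trid->traddr);
}

int main(void)
{
    struct spdk_env_opts opts;

    spdk_env_opts_init(&opts);
    opts.name = "post_init_sketch";          /* hypothetical app name */
    opts.core_mask = "0x1";
    opts.base_virtaddr = 0x200000000000ULL;
    if (spdk_env_init(&opts) != 0) {
        return 1;
    }

    /* A NULL transport ID means "probe all local PCIe NVMe devices". */
    if (spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) != 0) {
        fprintf(stderr, "spdk_nvme_probe failed\n");
    }

    spdk_env_fini();
    return 0;
}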
00:04:32.228 00:04:32.228 real 0m0.189s 00:04:32.228 user 0m0.051s 00:04:32.228 sys 0m0.038s 00:04:32.228 18:23:54 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:32.228 18:23:54 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:32.228 ************************************ 00:04:32.228 END TEST env_dpdk_post_init 00:04:32.228 ************************************ 00:04:32.228 18:23:54 env -- common/autotest_common.sh@1142 -- # return 0 00:04:32.228 18:23:54 env -- env/env.sh@26 -- # uname 00:04:32.228 18:23:54 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:32.228 18:23:54 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:32.228 18:23:54 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:32.228 18:23:54 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:32.228 18:23:54 env -- common/autotest_common.sh@10 -- # set +x 00:04:32.228 ************************************ 00:04:32.228 START TEST env_mem_callbacks 00:04:32.228 ************************************ 00:04:32.228 18:23:54 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:32.487 EAL: Detected CPU lcores: 10 00:04:32.487 EAL: Detected NUMA nodes: 1 00:04:32.487 EAL: Detected shared linkage of DPDK 00:04:32.487 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:32.487 EAL: Selected IOVA mode 'PA' 00:04:32.487 00:04:32.487 00:04:32.487 CUnit - A unit testing framework for C - Version 2.1-3 00:04:32.487 http://cunit.sourceforge.net/ 00:04:32.487 00:04:32.487 00:04:32.487 Suite: memory 00:04:32.487 Test: test ... 00:04:32.487 register 0x200000200000 2097152 00:04:32.487 malloc 3145728 00:04:32.487 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:32.487 register 0x200000400000 4194304 00:04:32.487 buf 0x200000500000 len 3145728 PASSED 00:04:32.487 malloc 64 00:04:32.487 buf 0x2000004fff40 len 64 PASSED 00:04:32.487 malloc 4194304 00:04:32.487 register 0x200000800000 6291456 00:04:32.487 buf 0x200000a00000 len 4194304 PASSED 00:04:32.487 free 0x200000500000 3145728 00:04:32.487 free 0x2000004fff40 64 00:04:32.487 unregister 0x200000400000 4194304 PASSED 00:04:32.487 free 0x200000a00000 4194304 00:04:32.487 unregister 0x200000800000 6291456 PASSED 00:04:32.487 malloc 8388608 00:04:32.487 register 0x200000400000 10485760 00:04:32.487 buf 0x200000600000 len 8388608 PASSED 00:04:32.487 free 0x200000600000 8388608 00:04:32.487 unregister 0x200000400000 10485760 PASSED 00:04:32.487 passed 00:04:32.487 00:04:32.487 Run Summary: Type Total Ran Passed Failed Inactive 00:04:32.487 suites 1 1 n/a 0 0 00:04:32.487 tests 1 1 1 0 0 00:04:32.487 asserts 15 15 15 0 n/a 00:04:32.487 00:04:32.487 Elapsed time = 0.009 seconds 00:04:32.487 ************************************ 00:04:32.487 END TEST env_mem_callbacks 00:04:32.487 ************************************ 00:04:32.487 00:04:32.487 real 0m0.157s 00:04:32.487 user 0m0.025s 00:04:32.487 sys 0m0.028s 00:04:32.487 18:23:54 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:32.487 18:23:54 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:32.487 18:23:55 env -- common/autotest_common.sh@1142 -- # return 0 00:04:32.487 ************************************ 00:04:32.487 END TEST env 00:04:32.487 ************************************ 00:04:32.487 00:04:32.487 real 0m2.156s 00:04:32.487 user 
0m1.026s 00:04:32.487 sys 0m0.797s 00:04:32.487 18:23:55 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:32.487 18:23:55 env -- common/autotest_common.sh@10 -- # set +x 00:04:32.487 18:23:55 -- common/autotest_common.sh@1142 -- # return 0 00:04:32.487 18:23:55 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:32.487 18:23:55 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:32.487 18:23:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:32.487 18:23:55 -- common/autotest_common.sh@10 -- # set +x 00:04:32.746 ************************************ 00:04:32.746 START TEST rpc 00:04:32.746 ************************************ 00:04:32.746 18:23:55 rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:32.746 * Looking for test storage... 00:04:32.746 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:32.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:32.746 18:23:55 rpc -- rpc/rpc.sh@65 -- # spdk_pid=60380 00:04:32.746 18:23:55 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:32.746 18:23:55 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:32.746 18:23:55 rpc -- rpc/rpc.sh@67 -- # waitforlisten 60380 00:04:32.746 18:23:55 rpc -- common/autotest_common.sh@829 -- # '[' -z 60380 ']' 00:04:32.746 18:23:55 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:32.746 18:23:55 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:32.746 18:23:55 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:32.746 18:23:55 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:32.746 18:23:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.746 [2024-07-15 18:23:55.283750] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:04:32.746 [2024-07-15 18:23:55.283830] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60380 ] 00:04:33.005 [2024-07-15 18:23:55.426930] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:33.005 [2024-07-15 18:23:55.520111] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:33.005 [2024-07-15 18:23:55.520161] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 60380' to capture a snapshot of events at runtime. 00:04:33.005 [2024-07-15 18:23:55.520170] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:33.005 [2024-07-15 18:23:55.520179] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:33.005 [2024-07-15 18:23:55.520185] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid60380 for offline analysis/debug. 
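Every rpc_cmd invocation in the tests that follow goes through JSON-RPC methods the target registered at startup (bdev_malloc_create, bdev_passthru_create, bdev_get_bdevs, ...). For orientation, this is roughly how such a method is wired up inside an SPDK application; the method name and handler below are invented for illustration and are not part of this test run.

#include "spdk/rpc.h"
#include "spdk/jsonrpc.h"
#include "spdk/json.h"

/* Invented example method: replies to a "hello_sketch" call with a string. */
static void
rpc_hello_sketch(struct spdk_jsonrpc_request *request, const struct spdk_json_val *params)
{
    struct spdk_json_write_ctx *w;

    if (params != NULL) {
        spdk_jsonrpc_send_error_response(request, SPDK_JSONRPC_ERROR_INVALID_PARAMS,
                                         "hello_sketch takes no parameters");
        return;
    }

    w = spdk_jsonrpc_begin_result(request);
    spdk_json_write_string(w, "hello from spdk_tgt");
    spdk_jsonrpc_end_result(request, w);
}
/* Registers the method so it is callable once the RPC server reaches the
 * runtime state; the built-in bdev_* methods use the same mechanism. */
SPDK_RPC_REGISTER("hello_sketch", rpc_hello_sketch, SPDK_RPC_RUNTIME)

Such a file would be compiled and linked into the target (or a custom app); a JSON-RPC request for "hello_sketch" sent over /var/tmp/spdk.sock would then reach the handler.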
00:04:33.005 [2024-07-15 18:23:55.520216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.573 18:23:56 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:33.573 18:23:56 rpc -- common/autotest_common.sh@862 -- # return 0 00:04:33.573 18:23:56 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:33.573 18:23:56 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:33.573 18:23:56 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:33.573 18:23:56 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:33.573 18:23:56 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:33.573 18:23:56 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:33.573 18:23:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:33.573 ************************************ 00:04:33.573 START TEST rpc_integrity 00:04:33.574 ************************************ 00:04:33.574 18:23:56 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:33.574 18:23:56 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:33.574 18:23:56 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:33.574 18:23:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.574 18:23:56 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:33.574 18:23:56 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:33.574 18:23:56 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:33.833 18:23:56 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:33.833 18:23:56 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:33.833 18:23:56 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:33.833 18:23:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.833 18:23:56 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:33.833 18:23:56 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:33.833 18:23:56 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:33.833 18:23:56 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:33.833 18:23:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.833 18:23:56 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:33.833 18:23:56 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:33.833 { 00:04:33.833 "aliases": [ 00:04:33.833 "f9515604-4908-4f9e-a37a-680b143a7388" 00:04:33.833 ], 00:04:33.833 "assigned_rate_limits": { 00:04:33.833 "r_mbytes_per_sec": 0, 00:04:33.833 "rw_ios_per_sec": 0, 00:04:33.833 "rw_mbytes_per_sec": 0, 00:04:33.833 "w_mbytes_per_sec": 0 00:04:33.833 }, 00:04:33.833 "block_size": 512, 00:04:33.833 "claimed": false, 00:04:33.833 "driver_specific": {}, 00:04:33.833 "memory_domains": [ 00:04:33.833 { 00:04:33.833 "dma_device_id": "system", 00:04:33.833 "dma_device_type": 1 00:04:33.833 }, 00:04:33.833 { 00:04:33.833 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:33.833 "dma_device_type": 2 00:04:33.833 } 00:04:33.833 ], 00:04:33.833 "name": "Malloc0", 
00:04:33.833 "num_blocks": 16384, 00:04:33.833 "product_name": "Malloc disk", 00:04:33.833 "supported_io_types": { 00:04:33.833 "abort": true, 00:04:33.833 "compare": false, 00:04:33.833 "compare_and_write": false, 00:04:33.833 "copy": true, 00:04:33.833 "flush": true, 00:04:33.833 "get_zone_info": false, 00:04:33.833 "nvme_admin": false, 00:04:33.833 "nvme_io": false, 00:04:33.833 "nvme_io_md": false, 00:04:33.833 "nvme_iov_md": false, 00:04:33.833 "read": true, 00:04:33.833 "reset": true, 00:04:33.833 "seek_data": false, 00:04:33.833 "seek_hole": false, 00:04:33.833 "unmap": true, 00:04:33.833 "write": true, 00:04:33.833 "write_zeroes": true, 00:04:33.833 "zcopy": true, 00:04:33.833 "zone_append": false, 00:04:33.833 "zone_management": false 00:04:33.833 }, 00:04:33.833 "uuid": "f9515604-4908-4f9e-a37a-680b143a7388", 00:04:33.833 "zoned": false 00:04:33.833 } 00:04:33.833 ]' 00:04:33.833 18:23:56 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:33.833 18:23:56 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:33.833 18:23:56 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:33.833 18:23:56 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:33.834 18:23:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.834 [2024-07-15 18:23:56.311422] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:33.834 [2024-07-15 18:23:56.311466] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:33.834 [2024-07-15 18:23:56.311482] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1247ad0 00:04:33.834 [2024-07-15 18:23:56.311490] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:33.834 [2024-07-15 18:23:56.312905] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:33.834 [2024-07-15 18:23:56.312939] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:33.834 Passthru0 00:04:33.834 18:23:56 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:33.834 18:23:56 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:33.834 18:23:56 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:33.834 18:23:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.834 18:23:56 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:33.834 18:23:56 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:33.834 { 00:04:33.834 "aliases": [ 00:04:33.834 "f9515604-4908-4f9e-a37a-680b143a7388" 00:04:33.834 ], 00:04:33.834 "assigned_rate_limits": { 00:04:33.834 "r_mbytes_per_sec": 0, 00:04:33.834 "rw_ios_per_sec": 0, 00:04:33.834 "rw_mbytes_per_sec": 0, 00:04:33.834 "w_mbytes_per_sec": 0 00:04:33.834 }, 00:04:33.834 "block_size": 512, 00:04:33.834 "claim_type": "exclusive_write", 00:04:33.834 "claimed": true, 00:04:33.834 "driver_specific": {}, 00:04:33.834 "memory_domains": [ 00:04:33.834 { 00:04:33.834 "dma_device_id": "system", 00:04:33.834 "dma_device_type": 1 00:04:33.834 }, 00:04:33.834 { 00:04:33.834 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:33.834 "dma_device_type": 2 00:04:33.834 } 00:04:33.834 ], 00:04:33.834 "name": "Malloc0", 00:04:33.834 "num_blocks": 16384, 00:04:33.834 "product_name": "Malloc disk", 00:04:33.834 "supported_io_types": { 00:04:33.834 "abort": true, 00:04:33.834 "compare": false, 00:04:33.834 
"compare_and_write": false, 00:04:33.834 "copy": true, 00:04:33.834 "flush": true, 00:04:33.834 "get_zone_info": false, 00:04:33.834 "nvme_admin": false, 00:04:33.834 "nvme_io": false, 00:04:33.834 "nvme_io_md": false, 00:04:33.834 "nvme_iov_md": false, 00:04:33.834 "read": true, 00:04:33.834 "reset": true, 00:04:33.834 "seek_data": false, 00:04:33.834 "seek_hole": false, 00:04:33.834 "unmap": true, 00:04:33.834 "write": true, 00:04:33.834 "write_zeroes": true, 00:04:33.834 "zcopy": true, 00:04:33.834 "zone_append": false, 00:04:33.834 "zone_management": false 00:04:33.834 }, 00:04:33.834 "uuid": "f9515604-4908-4f9e-a37a-680b143a7388", 00:04:33.834 "zoned": false 00:04:33.834 }, 00:04:33.834 { 00:04:33.834 "aliases": [ 00:04:33.834 "22ea63bb-2bc4-59f6-a3de-73528dff553c" 00:04:33.834 ], 00:04:33.834 "assigned_rate_limits": { 00:04:33.834 "r_mbytes_per_sec": 0, 00:04:33.834 "rw_ios_per_sec": 0, 00:04:33.834 "rw_mbytes_per_sec": 0, 00:04:33.834 "w_mbytes_per_sec": 0 00:04:33.834 }, 00:04:33.834 "block_size": 512, 00:04:33.834 "claimed": false, 00:04:33.834 "driver_specific": { 00:04:33.834 "passthru": { 00:04:33.834 "base_bdev_name": "Malloc0", 00:04:33.834 "name": "Passthru0" 00:04:33.834 } 00:04:33.834 }, 00:04:33.834 "memory_domains": [ 00:04:33.834 { 00:04:33.834 "dma_device_id": "system", 00:04:33.834 "dma_device_type": 1 00:04:33.834 }, 00:04:33.834 { 00:04:33.834 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:33.834 "dma_device_type": 2 00:04:33.834 } 00:04:33.834 ], 00:04:33.834 "name": "Passthru0", 00:04:33.834 "num_blocks": 16384, 00:04:33.834 "product_name": "passthru", 00:04:33.834 "supported_io_types": { 00:04:33.834 "abort": true, 00:04:33.834 "compare": false, 00:04:33.834 "compare_and_write": false, 00:04:33.834 "copy": true, 00:04:33.834 "flush": true, 00:04:33.834 "get_zone_info": false, 00:04:33.834 "nvme_admin": false, 00:04:33.834 "nvme_io": false, 00:04:33.834 "nvme_io_md": false, 00:04:33.834 "nvme_iov_md": false, 00:04:33.834 "read": true, 00:04:33.834 "reset": true, 00:04:33.834 "seek_data": false, 00:04:33.834 "seek_hole": false, 00:04:33.834 "unmap": true, 00:04:33.834 "write": true, 00:04:33.834 "write_zeroes": true, 00:04:33.834 "zcopy": true, 00:04:33.834 "zone_append": false, 00:04:33.834 "zone_management": false 00:04:33.834 }, 00:04:33.834 "uuid": "22ea63bb-2bc4-59f6-a3de-73528dff553c", 00:04:33.834 "zoned": false 00:04:33.834 } 00:04:33.834 ]' 00:04:33.834 18:23:56 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:33.834 18:23:56 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:33.834 18:23:56 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:33.834 18:23:56 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:33.834 18:23:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.834 18:23:56 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:33.834 18:23:56 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:33.834 18:23:56 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:33.834 18:23:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.834 18:23:56 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:33.834 18:23:56 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:33.834 18:23:56 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:33.834 18:23:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- 
# set +x 00:04:33.834 18:23:56 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:33.834 18:23:56 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:33.834 18:23:56 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:34.093 ************************************ 00:04:34.093 END TEST rpc_integrity 00:04:34.093 ************************************ 00:04:34.093 18:23:56 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:34.093 00:04:34.093 real 0m0.292s 00:04:34.093 user 0m0.149s 00:04:34.093 sys 0m0.059s 00:04:34.093 18:23:56 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:34.093 18:23:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.093 18:23:56 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:34.093 18:23:56 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:34.093 18:23:56 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:34.093 18:23:56 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:34.093 18:23:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.093 ************************************ 00:04:34.093 START TEST rpc_plugins 00:04:34.093 ************************************ 00:04:34.093 18:23:56 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:04:34.093 18:23:56 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:34.093 18:23:56 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:34.093 18:23:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:34.093 18:23:56 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:34.093 18:23:56 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:34.093 18:23:56 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:34.093 18:23:56 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:34.093 18:23:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:34.093 18:23:56 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:34.093 18:23:56 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:34.093 { 00:04:34.093 "aliases": [ 00:04:34.093 "50a4f59e-841e-4d7c-9f64-2518fd5abfbf" 00:04:34.093 ], 00:04:34.093 "assigned_rate_limits": { 00:04:34.093 "r_mbytes_per_sec": 0, 00:04:34.093 "rw_ios_per_sec": 0, 00:04:34.093 "rw_mbytes_per_sec": 0, 00:04:34.093 "w_mbytes_per_sec": 0 00:04:34.093 }, 00:04:34.093 "block_size": 4096, 00:04:34.093 "claimed": false, 00:04:34.093 "driver_specific": {}, 00:04:34.093 "memory_domains": [ 00:04:34.093 { 00:04:34.093 "dma_device_id": "system", 00:04:34.093 "dma_device_type": 1 00:04:34.093 }, 00:04:34.093 { 00:04:34.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:34.093 "dma_device_type": 2 00:04:34.093 } 00:04:34.093 ], 00:04:34.093 "name": "Malloc1", 00:04:34.093 "num_blocks": 256, 00:04:34.093 "product_name": "Malloc disk", 00:04:34.093 "supported_io_types": { 00:04:34.093 "abort": true, 00:04:34.093 "compare": false, 00:04:34.093 "compare_and_write": false, 00:04:34.093 "copy": true, 00:04:34.093 "flush": true, 00:04:34.093 "get_zone_info": false, 00:04:34.093 "nvme_admin": false, 00:04:34.093 "nvme_io": false, 00:04:34.093 "nvme_io_md": false, 00:04:34.093 "nvme_iov_md": false, 00:04:34.093 "read": true, 00:04:34.093 "reset": true, 00:04:34.093 "seek_data": false, 00:04:34.093 "seek_hole": false, 00:04:34.093 "unmap": true, 00:04:34.093 "write": true, 00:04:34.093 "write_zeroes": true, 
00:04:34.093 "zcopy": true, 00:04:34.093 "zone_append": false, 00:04:34.093 "zone_management": false 00:04:34.094 }, 00:04:34.094 "uuid": "50a4f59e-841e-4d7c-9f64-2518fd5abfbf", 00:04:34.094 "zoned": false 00:04:34.094 } 00:04:34.094 ]' 00:04:34.094 18:23:56 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:34.094 18:23:56 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:34.094 18:23:56 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:34.094 18:23:56 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:34.094 18:23:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:34.094 18:23:56 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:34.094 18:23:56 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:34.094 18:23:56 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:34.094 18:23:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:34.094 18:23:56 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:34.094 18:23:56 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:34.094 18:23:56 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:34.094 ************************************ 00:04:34.094 END TEST rpc_plugins 00:04:34.094 ************************************ 00:04:34.094 18:23:56 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:34.094 00:04:34.094 real 0m0.156s 00:04:34.094 user 0m0.089s 00:04:34.094 sys 0m0.029s 00:04:34.094 18:23:56 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:34.094 18:23:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:34.353 18:23:56 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:34.353 18:23:56 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:34.353 18:23:56 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:34.353 18:23:56 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:34.353 18:23:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.353 ************************************ 00:04:34.353 START TEST rpc_trace_cmd_test 00:04:34.353 ************************************ 00:04:34.353 18:23:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:04:34.353 18:23:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:34.353 18:23:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:34.353 18:23:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:34.353 18:23:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:34.353 18:23:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:34.353 18:23:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:34.353 "bdev": { 00:04:34.353 "mask": "0x8", 00:04:34.353 "tpoint_mask": "0xffffffffffffffff" 00:04:34.353 }, 00:04:34.353 "bdev_nvme": { 00:04:34.353 "mask": "0x4000", 00:04:34.353 "tpoint_mask": "0x0" 00:04:34.353 }, 00:04:34.353 "blobfs": { 00:04:34.353 "mask": "0x80", 00:04:34.353 "tpoint_mask": "0x0" 00:04:34.353 }, 00:04:34.353 "dsa": { 00:04:34.353 "mask": "0x200", 00:04:34.353 "tpoint_mask": "0x0" 00:04:34.353 }, 00:04:34.353 "ftl": { 00:04:34.353 "mask": "0x40", 00:04:34.353 "tpoint_mask": "0x0" 00:04:34.353 }, 00:04:34.353 "iaa": { 00:04:34.353 "mask": "0x1000", 00:04:34.353 "tpoint_mask": "0x0" 00:04:34.353 }, 00:04:34.353 "iscsi_conn": { 
00:04:34.353 "mask": "0x2", 00:04:34.353 "tpoint_mask": "0x0" 00:04:34.353 }, 00:04:34.353 "nvme_pcie": { 00:04:34.353 "mask": "0x800", 00:04:34.353 "tpoint_mask": "0x0" 00:04:34.353 }, 00:04:34.353 "nvme_tcp": { 00:04:34.353 "mask": "0x2000", 00:04:34.353 "tpoint_mask": "0x0" 00:04:34.353 }, 00:04:34.353 "nvmf_rdma": { 00:04:34.353 "mask": "0x10", 00:04:34.353 "tpoint_mask": "0x0" 00:04:34.353 }, 00:04:34.353 "nvmf_tcp": { 00:04:34.353 "mask": "0x20", 00:04:34.353 "tpoint_mask": "0x0" 00:04:34.353 }, 00:04:34.353 "scsi": { 00:04:34.353 "mask": "0x4", 00:04:34.353 "tpoint_mask": "0x0" 00:04:34.353 }, 00:04:34.353 "sock": { 00:04:34.353 "mask": "0x8000", 00:04:34.353 "tpoint_mask": "0x0" 00:04:34.353 }, 00:04:34.353 "thread": { 00:04:34.353 "mask": "0x400", 00:04:34.353 "tpoint_mask": "0x0" 00:04:34.353 }, 00:04:34.353 "tpoint_group_mask": "0x8", 00:04:34.353 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid60380" 00:04:34.353 }' 00:04:34.353 18:23:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:34.353 18:23:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:34.353 18:23:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:34.353 18:23:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:34.353 18:23:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:34.353 18:23:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:34.353 18:23:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:34.353 18:23:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:34.353 18:23:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:34.686 ************************************ 00:04:34.686 END TEST rpc_trace_cmd_test 00:04:34.686 ************************************ 00:04:34.686 18:23:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:34.686 00:04:34.686 real 0m0.242s 00:04:34.686 user 0m0.193s 00:04:34.686 sys 0m0.038s 00:04:34.686 18:23:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:34.686 18:23:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:34.686 18:23:57 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:34.686 18:23:57 rpc -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:04:34.686 18:23:57 rpc -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:04:34.686 18:23:57 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:34.686 18:23:57 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:34.686 18:23:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.686 ************************************ 00:04:34.686 START TEST go_rpc 00:04:34.686 ************************************ 00:04:34.686 18:23:57 rpc.go_rpc -- common/autotest_common.sh@1123 -- # go_rpc 00:04:34.686 18:23:57 rpc.go_rpc -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:04:34.686 18:23:57 rpc.go_rpc -- rpc/rpc.sh@51 -- # bdevs='[]' 00:04:34.686 18:23:57 rpc.go_rpc -- rpc/rpc.sh@52 -- # jq length 00:04:34.686 18:23:57 rpc.go_rpc -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:04:34.686 18:23:57 rpc.go_rpc -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:04:34.686 18:23:57 rpc.go_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:34.686 18:23:57 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.686 18:23:57 rpc.go_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
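hello_gorpc is a small Go JSON-RPC client talking to the same /var/tmp/spdk.sock socket the shell tests use. Stripped of the Go client library, the exchange is just one JSON object over a Unix-domain stream socket; a bare-bones equivalent in C follows (request id and buffer size are chosen arbitrarily, and a robust client would keep reading until the JSON document is complete):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

int main(void)
{
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    const char *req = "{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"bdev_get_bdevs\"}";
    char resp[8192];
    ssize_t n;
    int fd;

    strncpy(addr.sun_path, "/var/tmp/spdk.sock", sizeof(addr.sun_path) - 1);

    fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        perror("connect");
        return 1;
    }
    if (write(fd, req, strlen(req)) < 0) {
        perror("write");
        close(fd);
        return 1;
    }
    n = read(fd, resp, sizeof(resp) - 1);
    if (n > 0) {
        resp[n] = '\0';
        printf("%s\n", resp);   /* JSON array of bdevs, e.g. Malloc2 above */
    }
    close(fd);
    return 0;
}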
00:04:34.686 18:23:57 rpc.go_rpc -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:04:34.686 18:23:57 rpc.go_rpc -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:04:34.686 18:23:57 rpc.go_rpc -- rpc/rpc.sh@56 -- # bdevs='[{"aliases":["a0d34b8b-8cd4-462d-874c-c336f5b40a20"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"system","dma_device_type":1},{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"copy":true,"flush":true,"get_zone_info":false,"nvme_admin":false,"nvme_io":false,"nvme_io_md":false,"nvme_iov_md":false,"read":true,"reset":true,"seek_data":false,"seek_hole":false,"unmap":true,"write":true,"write_zeroes":true,"zcopy":true,"zone_append":false,"zone_management":false},"uuid":"a0d34b8b-8cd4-462d-874c-c336f5b40a20","zoned":false}]' 00:04:34.686 18:23:57 rpc.go_rpc -- rpc/rpc.sh@57 -- # jq length 00:04:34.686 18:23:57 rpc.go_rpc -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:04:34.686 18:23:57 rpc.go_rpc -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:34.686 18:23:57 rpc.go_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:34.686 18:23:57 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.686 18:23:57 rpc.go_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:34.686 18:23:57 rpc.go_rpc -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:04:34.686 18:23:57 rpc.go_rpc -- rpc/rpc.sh@60 -- # bdevs='[]' 00:04:34.686 18:23:57 rpc.go_rpc -- rpc/rpc.sh@61 -- # jq length 00:04:34.686 ************************************ 00:04:34.686 END TEST go_rpc 00:04:34.686 ************************************ 00:04:34.686 18:23:57 rpc.go_rpc -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:04:34.686 00:04:34.686 real 0m0.218s 00:04:34.686 user 0m0.135s 00:04:34.686 sys 0m0.051s 00:04:34.686 18:23:57 rpc.go_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:34.686 18:23:57 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.944 18:23:57 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:34.944 18:23:57 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:34.944 18:23:57 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:34.944 18:23:57 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:34.944 18:23:57 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:34.944 18:23:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.944 ************************************ 00:04:34.944 START TEST rpc_daemon_integrity 00:04:34.944 ************************************ 00:04:34.944 18:23:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:34.944 18:23:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:34.944 18:23:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:34.944 18:23:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.944 18:23:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:34.944 18:23:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:34.944 18:23:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:34.944 18:23:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 
-- # '[' 0 == 0 ']' 00:04:34.944 18:23:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:34.944 18:23:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:34.944 18:23:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.944 18:23:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:34.944 18:23:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:04:34.944 18:23:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:34.944 18:23:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:34.944 18:23:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.944 18:23:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:34.944 18:23:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:34.944 { 00:04:34.944 "aliases": [ 00:04:34.944 "ce4f423c-8e93-41d9-a963-a0147ff65315" 00:04:34.944 ], 00:04:34.944 "assigned_rate_limits": { 00:04:34.944 "r_mbytes_per_sec": 0, 00:04:34.944 "rw_ios_per_sec": 0, 00:04:34.944 "rw_mbytes_per_sec": 0, 00:04:34.944 "w_mbytes_per_sec": 0 00:04:34.944 }, 00:04:34.944 "block_size": 512, 00:04:34.944 "claimed": false, 00:04:34.944 "driver_specific": {}, 00:04:34.944 "memory_domains": [ 00:04:34.944 { 00:04:34.944 "dma_device_id": "system", 00:04:34.944 "dma_device_type": 1 00:04:34.944 }, 00:04:34.944 { 00:04:34.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:34.944 "dma_device_type": 2 00:04:34.944 } 00:04:34.944 ], 00:04:34.944 "name": "Malloc3", 00:04:34.944 "num_blocks": 16384, 00:04:34.944 "product_name": "Malloc disk", 00:04:34.944 "supported_io_types": { 00:04:34.944 "abort": true, 00:04:34.944 "compare": false, 00:04:34.944 "compare_and_write": false, 00:04:34.944 "copy": true, 00:04:34.944 "flush": true, 00:04:34.944 "get_zone_info": false, 00:04:34.944 "nvme_admin": false, 00:04:34.944 "nvme_io": false, 00:04:34.944 "nvme_io_md": false, 00:04:34.944 "nvme_iov_md": false, 00:04:34.944 "read": true, 00:04:34.944 "reset": true, 00:04:34.944 "seek_data": false, 00:04:34.944 "seek_hole": false, 00:04:34.944 "unmap": true, 00:04:34.944 "write": true, 00:04:34.944 "write_zeroes": true, 00:04:34.944 "zcopy": true, 00:04:34.944 "zone_append": false, 00:04:34.944 "zone_management": false 00:04:34.944 }, 00:04:34.944 "uuid": "ce4f423c-8e93-41d9-a963-a0147ff65315", 00:04:34.944 "zoned": false 00:04:34.944 } 00:04:34.944 ]' 00:04:34.944 18:23:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:34.944 18:23:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:34.944 18:23:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:04:34.944 18:23:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:34.944 18:23:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.944 [2024-07-15 18:23:57.477879] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:04:34.944 [2024-07-15 18:23:57.477918] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:34.944 [2024-07-15 18:23:57.477934] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x143ed70 00:04:34.944 [2024-07-15 18:23:57.477942] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:34.944 [2024-07-15 18:23:57.478971] 
vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:34.944 [2024-07-15 18:23:57.479001] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:34.944 Passthru0 00:04:34.944 18:23:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:34.944 18:23:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:34.944 18:23:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:34.944 18:23:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.944 18:23:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:34.944 18:23:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:34.944 { 00:04:34.944 "aliases": [ 00:04:34.944 "ce4f423c-8e93-41d9-a963-a0147ff65315" 00:04:34.944 ], 00:04:34.944 "assigned_rate_limits": { 00:04:34.944 "r_mbytes_per_sec": 0, 00:04:34.944 "rw_ios_per_sec": 0, 00:04:34.944 "rw_mbytes_per_sec": 0, 00:04:34.944 "w_mbytes_per_sec": 0 00:04:34.944 }, 00:04:34.944 "block_size": 512, 00:04:34.944 "claim_type": "exclusive_write", 00:04:34.944 "claimed": true, 00:04:34.944 "driver_specific": {}, 00:04:34.944 "memory_domains": [ 00:04:34.944 { 00:04:34.944 "dma_device_id": "system", 00:04:34.944 "dma_device_type": 1 00:04:34.944 }, 00:04:34.944 { 00:04:34.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:34.944 "dma_device_type": 2 00:04:34.944 } 00:04:34.944 ], 00:04:34.944 "name": "Malloc3", 00:04:34.944 "num_blocks": 16384, 00:04:34.944 "product_name": "Malloc disk", 00:04:34.944 "supported_io_types": { 00:04:34.944 "abort": true, 00:04:34.944 "compare": false, 00:04:34.944 "compare_and_write": false, 00:04:34.944 "copy": true, 00:04:34.944 "flush": true, 00:04:34.944 "get_zone_info": false, 00:04:34.944 "nvme_admin": false, 00:04:34.944 "nvme_io": false, 00:04:34.944 "nvme_io_md": false, 00:04:34.944 "nvme_iov_md": false, 00:04:34.944 "read": true, 00:04:34.944 "reset": true, 00:04:34.944 "seek_data": false, 00:04:34.944 "seek_hole": false, 00:04:34.944 "unmap": true, 00:04:34.944 "write": true, 00:04:34.944 "write_zeroes": true, 00:04:34.944 "zcopy": true, 00:04:34.945 "zone_append": false, 00:04:34.945 "zone_management": false 00:04:34.945 }, 00:04:34.945 "uuid": "ce4f423c-8e93-41d9-a963-a0147ff65315", 00:04:34.945 "zoned": false 00:04:34.945 }, 00:04:34.945 { 00:04:34.945 "aliases": [ 00:04:34.945 "2bdcf38e-bf1a-56de-88a5-621b23cead36" 00:04:34.945 ], 00:04:34.945 "assigned_rate_limits": { 00:04:34.945 "r_mbytes_per_sec": 0, 00:04:34.945 "rw_ios_per_sec": 0, 00:04:34.945 "rw_mbytes_per_sec": 0, 00:04:34.945 "w_mbytes_per_sec": 0 00:04:34.945 }, 00:04:34.945 "block_size": 512, 00:04:34.945 "claimed": false, 00:04:34.945 "driver_specific": { 00:04:34.945 "passthru": { 00:04:34.945 "base_bdev_name": "Malloc3", 00:04:34.945 "name": "Passthru0" 00:04:34.945 } 00:04:34.945 }, 00:04:34.945 "memory_domains": [ 00:04:34.945 { 00:04:34.945 "dma_device_id": "system", 00:04:34.945 "dma_device_type": 1 00:04:34.945 }, 00:04:34.945 { 00:04:34.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:34.945 "dma_device_type": 2 00:04:34.945 } 00:04:34.945 ], 00:04:34.945 "name": "Passthru0", 00:04:34.945 "num_blocks": 16384, 00:04:34.945 "product_name": "passthru", 00:04:34.945 "supported_io_types": { 00:04:34.945 "abort": true, 00:04:34.945 "compare": false, 00:04:34.945 "compare_and_write": false, 00:04:34.945 "copy": true, 00:04:34.945 "flush": true, 00:04:34.945 
"get_zone_info": false, 00:04:34.945 "nvme_admin": false, 00:04:34.945 "nvme_io": false, 00:04:34.945 "nvme_io_md": false, 00:04:34.945 "nvme_iov_md": false, 00:04:34.945 "read": true, 00:04:34.945 "reset": true, 00:04:34.945 "seek_data": false, 00:04:34.945 "seek_hole": false, 00:04:34.945 "unmap": true, 00:04:34.945 "write": true, 00:04:34.945 "write_zeroes": true, 00:04:34.945 "zcopy": true, 00:04:34.945 "zone_append": false, 00:04:34.945 "zone_management": false 00:04:34.945 }, 00:04:34.945 "uuid": "2bdcf38e-bf1a-56de-88a5-621b23cead36", 00:04:34.945 "zoned": false 00:04:34.945 } 00:04:34.945 ]' 00:04:34.945 18:23:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:35.202 18:23:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:35.202 18:23:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:35.202 18:23:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:35.202 18:23:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.202 18:23:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:35.202 18:23:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:04:35.202 18:23:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:35.202 18:23:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.202 18:23:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:35.202 18:23:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:35.202 18:23:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:35.202 18:23:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.202 18:23:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:35.202 18:23:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:35.202 18:23:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:35.202 ************************************ 00:04:35.202 END TEST rpc_daemon_integrity 00:04:35.202 ************************************ 00:04:35.202 18:23:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:35.202 00:04:35.202 real 0m0.317s 00:04:35.202 user 0m0.195s 00:04:35.202 sys 0m0.054s 00:04:35.202 18:23:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:35.202 18:23:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.202 18:23:57 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:35.202 18:23:57 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:35.202 18:23:57 rpc -- rpc/rpc.sh@84 -- # killprocess 60380 00:04:35.202 18:23:57 rpc -- common/autotest_common.sh@948 -- # '[' -z 60380 ']' 00:04:35.202 18:23:57 rpc -- common/autotest_common.sh@952 -- # kill -0 60380 00:04:35.202 18:23:57 rpc -- common/autotest_common.sh@953 -- # uname 00:04:35.202 18:23:57 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:35.202 18:23:57 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60380 00:04:35.202 killing process with pid 60380 00:04:35.202 18:23:57 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:35.202 18:23:57 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:35.202 18:23:57 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 
60380' 00:04:35.202 18:23:57 rpc -- common/autotest_common.sh@967 -- # kill 60380 00:04:35.202 18:23:57 rpc -- common/autotest_common.sh@972 -- # wait 60380 00:04:35.460 00:04:35.460 real 0m2.954s 00:04:35.460 user 0m3.749s 00:04:35.460 sys 0m0.863s 00:04:35.460 18:23:58 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:35.460 18:23:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.460 ************************************ 00:04:35.460 END TEST rpc 00:04:35.460 ************************************ 00:04:35.718 18:23:58 -- common/autotest_common.sh@1142 -- # return 0 00:04:35.718 18:23:58 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:35.718 18:23:58 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:35.718 18:23:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:35.718 18:23:58 -- common/autotest_common.sh@10 -- # set +x 00:04:35.718 ************************************ 00:04:35.718 START TEST skip_rpc 00:04:35.718 ************************************ 00:04:35.718 18:23:58 skip_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:35.718 * Looking for test storage... 00:04:35.718 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:35.719 18:23:58 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:35.719 18:23:58 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:35.719 18:23:58 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:35.719 18:23:58 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:35.719 18:23:58 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:35.719 18:23:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.719 ************************************ 00:04:35.719 START TEST skip_rpc 00:04:35.719 ************************************ 00:04:35.719 18:23:58 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:04:35.719 18:23:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=60640 00:04:35.719 18:23:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:35.719 18:23:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:35.719 18:23:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:35.719 [2024-07-15 18:23:58.318869] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 
00:04:35.719 [2024-07-15 18:23:58.319081] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60640 ] 00:04:35.976 [2024-07-15 18:23:58.460596] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.976 [2024-07-15 18:23:58.553343] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.243 18:24:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:41.243 18:24:03 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:41.243 18:24:03 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:41.243 18:24:03 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:41.243 18:24:03 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:41.243 18:24:03 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:41.243 18:24:03 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:41.243 18:24:03 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:04:41.243 18:24:03 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:41.243 18:24:03 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.243 2024/07/15 18:24:03 error on client creation, err: error during client creation for Unix socket, err: could not connect to a Unix socket on address /var/tmp/spdk.sock, err: dial unix /var/tmp/spdk.sock: connect: no such file or directory 00:04:41.243 18:24:03 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:41.243 18:24:03 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:41.243 18:24:03 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:41.243 18:24:03 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:41.243 18:24:03 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:41.243 18:24:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:41.243 18:24:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 60640 00:04:41.243 18:24:03 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 60640 ']' 00:04:41.243 18:24:03 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 60640 00:04:41.243 18:24:03 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:04:41.243 18:24:03 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:41.243 18:24:03 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60640 00:04:41.243 18:24:03 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:41.243 18:24:03 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:41.243 killing process with pid 60640 00:04:41.243 18:24:03 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60640' 00:04:41.243 18:24:03 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 60640 00:04:41.243 18:24:03 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 60640 00:04:41.243 00:04:41.243 real 0m5.372s 00:04:41.243 user 0m5.045s 00:04:41.243 sys 0m0.241s 00:04:41.243 18:24:03 skip_rpc.skip_rpc -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:04:41.243 18:24:03 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.243 ************************************ 00:04:41.243 END TEST skip_rpc 00:04:41.243 ************************************ 00:04:41.243 18:24:03 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:41.243 18:24:03 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:41.243 18:24:03 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:41.243 18:24:03 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:41.243 18:24:03 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.243 ************************************ 00:04:41.243 START TEST skip_rpc_with_json 00:04:41.243 ************************************ 00:04:41.243 18:24:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:04:41.243 18:24:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:41.243 18:24:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=60727 00:04:41.243 18:24:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:41.243 18:24:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:41.243 18:24:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 60727 00:04:41.243 18:24:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 60727 ']' 00:04:41.243 18:24:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:41.243 18:24:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:41.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:41.243 18:24:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:41.243 18:24:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:41.243 18:24:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:41.243 [2024-07-15 18:24:03.763643] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 
00:04:41.243 [2024-07-15 18:24:03.763715] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60727 ] 00:04:41.502 [2024-07-15 18:24:03.906417] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.502 [2024-07-15 18:24:03.988949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.069 18:24:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:42.069 18:24:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:04:42.069 18:24:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:42.069 18:24:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:42.069 18:24:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:42.069 [2024-07-15 18:24:04.613921] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:42.069 2024/07/15 18:24:04 error on JSON-RPC call, method: nvmf_get_transports, params: map[trtype:tcp], err: error received for nvmf_get_transports method, err: Code=-19 Msg=No such device 00:04:42.069 request: 00:04:42.069 { 00:04:42.069 "method": "nvmf_get_transports", 00:04:42.069 "params": { 00:04:42.069 "trtype": "tcp" 00:04:42.069 } 00:04:42.069 } 00:04:42.069 Got JSON-RPC error response 00:04:42.069 GoRPCClient: error on JSON-RPC call 00:04:42.069 18:24:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:42.069 18:24:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:42.069 18:24:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:42.069 18:24:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:42.069 [2024-07-15 18:24:04.625979] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:42.069 18:24:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:42.069 18:24:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:42.069 18:24:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:42.069 18:24:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:42.327 18:24:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:42.327 18:24:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:42.327 { 00:04:42.327 "subsystems": [ 00:04:42.327 { 00:04:42.327 "subsystem": "keyring", 00:04:42.327 "config": [] 00:04:42.327 }, 00:04:42.327 { 00:04:42.327 "subsystem": "iobuf", 00:04:42.327 "config": [ 00:04:42.327 { 00:04:42.327 "method": "iobuf_set_options", 00:04:42.327 "params": { 00:04:42.327 "large_bufsize": 135168, 00:04:42.327 "large_pool_count": 1024, 00:04:42.327 "small_bufsize": 8192, 00:04:42.327 "small_pool_count": 8192 00:04:42.327 } 00:04:42.327 } 00:04:42.327 ] 00:04:42.327 }, 00:04:42.327 { 00:04:42.327 "subsystem": "sock", 00:04:42.327 "config": [ 00:04:42.327 { 00:04:42.327 "method": "sock_set_default_impl", 00:04:42.327 "params": { 00:04:42.327 "impl_name": "posix" 00:04:42.327 } 00:04:42.327 }, 00:04:42.327 { 00:04:42.327 "method": 
"sock_impl_set_options", 00:04:42.327 "params": { 00:04:42.327 "enable_ktls": false, 00:04:42.327 "enable_placement_id": 0, 00:04:42.327 "enable_quickack": false, 00:04:42.327 "enable_recv_pipe": true, 00:04:42.327 "enable_zerocopy_send_client": false, 00:04:42.327 "enable_zerocopy_send_server": true, 00:04:42.327 "impl_name": "ssl", 00:04:42.327 "recv_buf_size": 4096, 00:04:42.327 "send_buf_size": 4096, 00:04:42.327 "tls_version": 0, 00:04:42.327 "zerocopy_threshold": 0 00:04:42.327 } 00:04:42.327 }, 00:04:42.327 { 00:04:42.327 "method": "sock_impl_set_options", 00:04:42.327 "params": { 00:04:42.327 "enable_ktls": false, 00:04:42.327 "enable_placement_id": 0, 00:04:42.327 "enable_quickack": false, 00:04:42.327 "enable_recv_pipe": true, 00:04:42.327 "enable_zerocopy_send_client": false, 00:04:42.327 "enable_zerocopy_send_server": true, 00:04:42.327 "impl_name": "posix", 00:04:42.327 "recv_buf_size": 2097152, 00:04:42.327 "send_buf_size": 2097152, 00:04:42.327 "tls_version": 0, 00:04:42.327 "zerocopy_threshold": 0 00:04:42.327 } 00:04:42.327 } 00:04:42.327 ] 00:04:42.327 }, 00:04:42.327 { 00:04:42.327 "subsystem": "vmd", 00:04:42.327 "config": [] 00:04:42.327 }, 00:04:42.327 { 00:04:42.327 "subsystem": "accel", 00:04:42.327 "config": [ 00:04:42.327 { 00:04:42.327 "method": "accel_set_options", 00:04:42.327 "params": { 00:04:42.327 "buf_count": 2048, 00:04:42.327 "large_cache_size": 16, 00:04:42.327 "sequence_count": 2048, 00:04:42.327 "small_cache_size": 128, 00:04:42.327 "task_count": 2048 00:04:42.327 } 00:04:42.327 } 00:04:42.327 ] 00:04:42.327 }, 00:04:42.327 { 00:04:42.327 "subsystem": "bdev", 00:04:42.327 "config": [ 00:04:42.327 { 00:04:42.328 "method": "bdev_set_options", 00:04:42.328 "params": { 00:04:42.328 "bdev_auto_examine": true, 00:04:42.328 "bdev_io_cache_size": 256, 00:04:42.328 "bdev_io_pool_size": 65535, 00:04:42.328 "iobuf_large_cache_size": 16, 00:04:42.328 "iobuf_small_cache_size": 128 00:04:42.328 } 00:04:42.328 }, 00:04:42.328 { 00:04:42.328 "method": "bdev_raid_set_options", 00:04:42.328 "params": { 00:04:42.328 "process_window_size_kb": 1024 00:04:42.328 } 00:04:42.328 }, 00:04:42.328 { 00:04:42.328 "method": "bdev_iscsi_set_options", 00:04:42.328 "params": { 00:04:42.328 "timeout_sec": 30 00:04:42.328 } 00:04:42.328 }, 00:04:42.328 { 00:04:42.328 "method": "bdev_nvme_set_options", 00:04:42.328 "params": { 00:04:42.328 "action_on_timeout": "none", 00:04:42.328 "allow_accel_sequence": false, 00:04:42.328 "arbitration_burst": 0, 00:04:42.328 "bdev_retry_count": 3, 00:04:42.328 "ctrlr_loss_timeout_sec": 0, 00:04:42.328 "delay_cmd_submit": true, 00:04:42.328 "dhchap_dhgroups": [ 00:04:42.328 "null", 00:04:42.328 "ffdhe2048", 00:04:42.328 "ffdhe3072", 00:04:42.328 "ffdhe4096", 00:04:42.328 "ffdhe6144", 00:04:42.328 "ffdhe8192" 00:04:42.328 ], 00:04:42.328 "dhchap_digests": [ 00:04:42.328 "sha256", 00:04:42.328 "sha384", 00:04:42.328 "sha512" 00:04:42.328 ], 00:04:42.328 "disable_auto_failback": false, 00:04:42.328 "fast_io_fail_timeout_sec": 0, 00:04:42.328 "generate_uuids": false, 00:04:42.328 "high_priority_weight": 0, 00:04:42.328 "io_path_stat": false, 00:04:42.328 "io_queue_requests": 0, 00:04:42.328 "keep_alive_timeout_ms": 10000, 00:04:42.328 "low_priority_weight": 0, 00:04:42.328 "medium_priority_weight": 0, 00:04:42.328 "nvme_adminq_poll_period_us": 10000, 00:04:42.328 "nvme_error_stat": false, 00:04:42.328 "nvme_ioq_poll_period_us": 0, 00:04:42.328 "rdma_cm_event_timeout_ms": 0, 00:04:42.328 "rdma_max_cq_size": 0, 00:04:42.328 "rdma_srq_size": 0, 00:04:42.328 
"reconnect_delay_sec": 0, 00:04:42.328 "timeout_admin_us": 0, 00:04:42.328 "timeout_us": 0, 00:04:42.328 "transport_ack_timeout": 0, 00:04:42.328 "transport_retry_count": 4, 00:04:42.328 "transport_tos": 0 00:04:42.328 } 00:04:42.328 }, 00:04:42.328 { 00:04:42.328 "method": "bdev_nvme_set_hotplug", 00:04:42.328 "params": { 00:04:42.328 "enable": false, 00:04:42.328 "period_us": 100000 00:04:42.328 } 00:04:42.328 }, 00:04:42.328 { 00:04:42.328 "method": "bdev_wait_for_examine" 00:04:42.328 } 00:04:42.328 ] 00:04:42.328 }, 00:04:42.328 { 00:04:42.328 "subsystem": "scsi", 00:04:42.328 "config": null 00:04:42.328 }, 00:04:42.328 { 00:04:42.328 "subsystem": "scheduler", 00:04:42.328 "config": [ 00:04:42.328 { 00:04:42.328 "method": "framework_set_scheduler", 00:04:42.328 "params": { 00:04:42.328 "name": "static" 00:04:42.328 } 00:04:42.328 } 00:04:42.328 ] 00:04:42.328 }, 00:04:42.328 { 00:04:42.328 "subsystem": "vhost_scsi", 00:04:42.328 "config": [] 00:04:42.328 }, 00:04:42.328 { 00:04:42.328 "subsystem": "vhost_blk", 00:04:42.328 "config": [] 00:04:42.328 }, 00:04:42.328 { 00:04:42.328 "subsystem": "ublk", 00:04:42.328 "config": [] 00:04:42.328 }, 00:04:42.328 { 00:04:42.328 "subsystem": "nbd", 00:04:42.328 "config": [] 00:04:42.328 }, 00:04:42.328 { 00:04:42.328 "subsystem": "nvmf", 00:04:42.328 "config": [ 00:04:42.328 { 00:04:42.328 "method": "nvmf_set_config", 00:04:42.328 "params": { 00:04:42.328 "admin_cmd_passthru": { 00:04:42.328 "identify_ctrlr": false 00:04:42.328 }, 00:04:42.328 "discovery_filter": "match_any" 00:04:42.328 } 00:04:42.328 }, 00:04:42.328 { 00:04:42.328 "method": "nvmf_set_max_subsystems", 00:04:42.328 "params": { 00:04:42.328 "max_subsystems": 1024 00:04:42.328 } 00:04:42.328 }, 00:04:42.328 { 00:04:42.328 "method": "nvmf_set_crdt", 00:04:42.328 "params": { 00:04:42.328 "crdt1": 0, 00:04:42.328 "crdt2": 0, 00:04:42.328 "crdt3": 0 00:04:42.328 } 00:04:42.328 }, 00:04:42.328 { 00:04:42.328 "method": "nvmf_create_transport", 00:04:42.328 "params": { 00:04:42.328 "abort_timeout_sec": 1, 00:04:42.328 "ack_timeout": 0, 00:04:42.328 "buf_cache_size": 4294967295, 00:04:42.328 "c2h_success": true, 00:04:42.328 "data_wr_pool_size": 0, 00:04:42.328 "dif_insert_or_strip": false, 00:04:42.328 "in_capsule_data_size": 4096, 00:04:42.328 "io_unit_size": 131072, 00:04:42.328 "max_aq_depth": 128, 00:04:42.328 "max_io_qpairs_per_ctrlr": 127, 00:04:42.328 "max_io_size": 131072, 00:04:42.328 "max_queue_depth": 128, 00:04:42.328 "num_shared_buffers": 511, 00:04:42.328 "sock_priority": 0, 00:04:42.328 "trtype": "TCP", 00:04:42.328 "zcopy": false 00:04:42.328 } 00:04:42.328 } 00:04:42.328 ] 00:04:42.328 }, 00:04:42.328 { 00:04:42.328 "subsystem": "iscsi", 00:04:42.328 "config": [ 00:04:42.328 { 00:04:42.328 "method": "iscsi_set_options", 00:04:42.328 "params": { 00:04:42.328 "allow_duplicated_isid": false, 00:04:42.328 "chap_group": 0, 00:04:42.328 "data_out_pool_size": 2048, 00:04:42.328 "default_time2retain": 20, 00:04:42.328 "default_time2wait": 2, 00:04:42.328 "disable_chap": false, 00:04:42.328 "error_recovery_level": 0, 00:04:42.328 "first_burst_length": 8192, 00:04:42.328 "immediate_data": true, 00:04:42.328 "immediate_data_pool_size": 16384, 00:04:42.328 "max_connections_per_session": 2, 00:04:42.328 "max_large_datain_per_connection": 64, 00:04:42.328 "max_queue_depth": 64, 00:04:42.328 "max_r2t_per_connection": 4, 00:04:42.328 "max_sessions": 128, 00:04:42.328 "mutual_chap": false, 00:04:42.328 "node_base": "iqn.2016-06.io.spdk", 00:04:42.328 "nop_in_interval": 30, 00:04:42.328 
"nop_timeout": 60, 00:04:42.328 "pdu_pool_size": 36864, 00:04:42.328 "require_chap": false 00:04:42.328 } 00:04:42.328 } 00:04:42.328 ] 00:04:42.328 } 00:04:42.328 ] 00:04:42.328 } 00:04:42.328 18:24:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:42.328 18:24:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 60727 00:04:42.328 18:24:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 60727 ']' 00:04:42.328 18:24:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 60727 00:04:42.328 18:24:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:42.328 18:24:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:42.328 18:24:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60727 00:04:42.328 18:24:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:42.328 killing process with pid 60727 00:04:42.328 18:24:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:42.328 18:24:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60727' 00:04:42.328 18:24:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 60727 00:04:42.328 18:24:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 60727 00:04:42.587 18:24:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=60766 00:04:42.587 18:24:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:42.587 18:24:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:47.855 18:24:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 60766 00:04:47.855 18:24:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 60766 ']' 00:04:47.855 18:24:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 60766 00:04:47.855 18:24:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:47.855 18:24:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:47.855 18:24:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60766 00:04:47.855 18:24:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:47.855 18:24:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:47.855 18:24:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60766' 00:04:47.855 killing process with pid 60766 00:04:47.855 18:24:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 60766 00:04:47.855 18:24:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 60766 00:04:48.114 18:24:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:48.114 18:24:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:48.114 00:04:48.114 real 0m6.828s 00:04:48.114 user 0m6.512s 00:04:48.114 sys 0m0.611s 00:04:48.114 18:24:10 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:04:48.114 18:24:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:48.114 ************************************ 00:04:48.114 END TEST skip_rpc_with_json 00:04:48.114 ************************************ 00:04:48.114 18:24:10 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:48.114 18:24:10 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:48.114 18:24:10 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:48.114 18:24:10 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:48.114 18:24:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.114 ************************************ 00:04:48.114 START TEST skip_rpc_with_delay 00:04:48.114 ************************************ 00:04:48.114 18:24:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:04:48.114 18:24:10 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:48.114 18:24:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:04:48.114 18:24:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:48.114 18:24:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:48.114 18:24:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:48.114 18:24:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:48.114 18:24:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:48.114 18:24:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:48.114 18:24:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:48.114 18:24:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:48.114 18:24:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:48.114 18:24:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:48.114 [2024-07-15 18:24:10.668074] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
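The spdk_app_start error just above is what the skip_rpc_with_delay case is checking for: spdk_tgt refuses '--wait-for-rpc' when the RPC server is disabled with '--no-rpc-server', and the NOT wrapper turns that startup failure into a pass (the unclaim_cpu_cores message that follows is just the cleanup path). A minimal shell sketch of the same check, reusing the binary path and flags from the trace; the harness around it is omitted:

# Expect spdk_tgt to reject --wait-for-rpc when no RPC server will be started.
SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
if "$SPDK_TGT" --no-rpc-server -m 0x1 --wait-for-rpc; then
    echo "unexpected: spdk_tgt accepted --wait-for-rpc without an RPC server" >&2
    exit 1
fi
echo "got the expected startup failure"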
00:04:48.114 [2024-07-15 18:24:10.668194] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:48.114 18:24:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:04:48.114 18:24:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:48.114 18:24:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:48.114 18:24:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:48.114 00:04:48.114 real 0m0.079s 00:04:48.114 user 0m0.047s 00:04:48.114 sys 0m0.031s 00:04:48.114 18:24:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:48.114 18:24:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:48.114 ************************************ 00:04:48.114 END TEST skip_rpc_with_delay 00:04:48.114 ************************************ 00:04:48.373 18:24:10 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:48.373 18:24:10 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:48.373 18:24:10 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:48.373 18:24:10 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:48.373 18:24:10 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:48.373 18:24:10 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:48.373 18:24:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.373 ************************************ 00:04:48.373 START TEST exit_on_failed_rpc_init 00:04:48.373 ************************************ 00:04:48.373 18:24:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:04:48.373 18:24:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=60876 00:04:48.373 18:24:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 60876 00:04:48.373 18:24:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:48.374 18:24:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 60876 ']' 00:04:48.374 18:24:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:48.374 18:24:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:48.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:48.374 18:24:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:48.374 18:24:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:48.374 18:24:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:48.374 [2024-07-15 18:24:10.818068] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 
00:04:48.374 [2024-07-15 18:24:10.818146] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60876 ] 00:04:48.374 [2024-07-15 18:24:10.946270] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.632 [2024-07-15 18:24:11.039598] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.200 18:24:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:49.200 18:24:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:04:49.200 18:24:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:49.200 18:24:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:49.200 18:24:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:04:49.200 18:24:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:49.200 18:24:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:49.200 18:24:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:49.200 18:24:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:49.200 18:24:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:49.200 18:24:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:49.200 18:24:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:49.200 18:24:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:49.200 18:24:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:49.200 18:24:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:49.200 [2024-07-15 18:24:11.752559] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:04:49.200 [2024-07-15 18:24:11.752678] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60906 ] 00:04:49.459 [2024-07-15 18:24:11.894468] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.459 [2024-07-15 18:24:11.977980] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:49.459 [2024-07-15 18:24:11.978058] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
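The "RPC Unix domain socket path /var/tmp/spdk.sock in use" error just above (and the rpc.c / spdk_app_stop fallout that follows) is deliberate: exit_on_failed_rpc_init starts a second spdk_tgt against the default socket that pid 60876 already holds and expects spdk_app_start to fail. Outside the test, running two targets side by side simply means giving each instance its own RPC socket via -r; a hedged sketch, where the second socket path is a made-up example:

SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
"$SPDK_TGT" -m 0x1 &                               # first target, default /var/tmp/spdk.sock
"$SPDK_TGT" -m 0x2 -r /var/tmp/spdk_second.sock &  # second target, its own socket, no conflict
wait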
00:04:49.459 [2024-07-15 18:24:11.978071] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:49.459 [2024-07-15 18:24:11.978081] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:49.459 18:24:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:04:49.459 18:24:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:49.459 18:24:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:04:49.459 18:24:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:04:49.459 18:24:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:04:49.459 18:24:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:49.459 18:24:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:49.459 18:24:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 60876 00:04:49.459 18:24:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 60876 ']' 00:04:49.459 18:24:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 60876 00:04:49.459 18:24:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:04:49.459 18:24:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:49.718 18:24:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60876 00:04:49.718 18:24:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:49.718 killing process with pid 60876 00:04:49.718 18:24:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:49.718 18:24:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60876' 00:04:49.718 18:24:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 60876 00:04:49.718 18:24:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 60876 00:04:49.977 00:04:49.977 real 0m1.646s 00:04:49.977 user 0m1.856s 00:04:49.977 sys 0m0.389s 00:04:49.977 18:24:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:49.977 18:24:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:49.977 ************************************ 00:04:49.977 END TEST exit_on_failed_rpc_init 00:04:49.977 ************************************ 00:04:49.977 18:24:12 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:49.977 18:24:12 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:49.977 00:04:49.977 real 0m14.345s 00:04:49.977 user 0m13.591s 00:04:49.977 sys 0m1.562s 00:04:49.977 18:24:12 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:49.977 18:24:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.977 ************************************ 00:04:49.977 END TEST skip_rpc 00:04:49.977 ************************************ 00:04:49.977 18:24:12 -- common/autotest_common.sh@1142 -- # return 0 00:04:49.977 18:24:12 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:49.977 18:24:12 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:49.977 
18:24:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.977 18:24:12 -- common/autotest_common.sh@10 -- # set +x 00:04:49.977 ************************************ 00:04:49.977 START TEST rpc_client 00:04:49.977 ************************************ 00:04:49.977 18:24:12 rpc_client -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:50.269 * Looking for test storage... 00:04:50.269 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:50.269 18:24:12 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:50.269 OK 00:04:50.269 18:24:12 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:50.269 00:04:50.269 real 0m0.159s 00:04:50.269 user 0m0.067s 00:04:50.269 sys 0m0.101s 00:04:50.269 18:24:12 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:50.269 18:24:12 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:50.269 ************************************ 00:04:50.269 END TEST rpc_client 00:04:50.269 ************************************ 00:04:50.269 18:24:12 -- common/autotest_common.sh@1142 -- # return 0 00:04:50.269 18:24:12 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:50.269 18:24:12 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:50.269 18:24:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:50.269 18:24:12 -- common/autotest_common.sh@10 -- # set +x 00:04:50.269 ************************************ 00:04:50.269 START TEST json_config 00:04:50.269 ************************************ 00:04:50.269 18:24:12 json_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:50.269 18:24:12 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:50.269 18:24:12 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:50.269 18:24:12 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:50.269 18:24:12 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:50.269 18:24:12 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:50.269 18:24:12 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:50.269 18:24:12 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:50.269 18:24:12 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:50.269 18:24:12 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:50.269 18:24:12 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:50.269 18:24:12 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:50.269 18:24:12 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:50.269 18:24:12 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:04:50.269 18:24:12 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=ee8aff67-4252-4979-91cf-1a72f40d57b6 00:04:50.269 18:24:12 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:50.269 18:24:12 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:50.269 18:24:12 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:50.269 18:24:12 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:50.269 18:24:12 json_config -- 
nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:50.269 18:24:12 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:50.269 18:24:12 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:50.269 18:24:12 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:50.269 18:24:12 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.269 18:24:12 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.269 18:24:12 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.269 18:24:12 json_config -- paths/export.sh@5 -- # export PATH 00:04:50.269 18:24:12 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.269 18:24:12 json_config -- nvmf/common.sh@47 -- # : 0 00:04:50.269 18:24:12 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:50.269 18:24:12 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:50.269 18:24:12 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:50.269 18:24:12 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:50.269 18:24:12 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:50.269 18:24:12 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:50.269 18:24:12 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:50.269 18:24:12 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:50.269 18:24:12 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:50.269 18:24:12 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:50.269 18:24:12 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:50.269 18:24:12 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:50.270 18:24:12 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + 
SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:50.270 18:24:12 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:50.270 18:24:12 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:50.270 18:24:12 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:50.270 18:24:12 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:50.270 18:24:12 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:50.270 18:24:12 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:50.270 18:24:12 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:04:50.270 18:24:12 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:50.270 18:24:12 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:50.270 INFO: JSON configuration test init 00:04:50.270 18:24:12 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:50.270 18:24:12 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:04:50.270 18:24:12 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:04:50.270 18:24:12 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:04:50.270 18:24:12 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:50.270 18:24:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:50.270 18:24:12 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:04:50.270 18:24:12 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:50.270 18:24:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:50.543 Waiting for target to run... 00:04:50.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:50.543 18:24:12 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:04:50.543 18:24:12 json_config -- json_config/common.sh@9 -- # local app=target 00:04:50.543 18:24:12 json_config -- json_config/common.sh@10 -- # shift 00:04:50.543 18:24:12 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:50.543 18:24:12 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:50.543 18:24:12 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:50.543 18:24:12 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:50.543 18:24:12 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:50.543 18:24:12 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=61024 00:04:50.543 18:24:12 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:50.543 18:24:12 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
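'Waiting for target to run...' above is the launch-and-poll pattern used throughout these json_config tests: start spdk_tgt with --wait-for-rpc on a private RPC socket, then poll that socket until it answers. A sketch of the same idea, reusing the paths and flags from the trace; the rpc_get_methods probe is an assumption used here as a cheap liveness check, not something taken from this log:

SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/build/bin/spdk_tgt" -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
tgt_pid=$!
# Poll the RPC socket until the target answers, bailing out if it dies first.
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$tgt_pid" 2>/dev/null || { echo "spdk_tgt exited early" >&2; exit 1; }
    sleep 0.5
done
echo "target is listening on /var/tmp/spdk_tgt.sock"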
00:04:50.543 18:24:12 json_config -- json_config/common.sh@25 -- # waitforlisten 61024 /var/tmp/spdk_tgt.sock 00:04:50.543 18:24:12 json_config -- common/autotest_common.sh@829 -- # '[' -z 61024 ']' 00:04:50.543 18:24:12 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:50.543 18:24:12 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:50.543 18:24:12 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:50.543 18:24:12 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:50.543 18:24:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:50.543 [2024-07-15 18:24:12.935102] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:04:50.543 [2024-07-15 18:24:12.935385] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61024 ] 00:04:50.803 [2024-07-15 18:24:13.306414] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.803 [2024-07-15 18:24:13.384829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.370 18:24:13 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:51.370 18:24:13 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:51.370 18:24:13 json_config -- json_config/common.sh@26 -- # echo '' 00:04:51.370 00:04:51.370 18:24:13 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:04:51.370 18:24:13 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:04:51.370 18:24:13 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:51.370 18:24:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:51.370 18:24:13 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:04:51.370 18:24:13 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:04:51.370 18:24:13 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:51.370 18:24:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:51.370 18:24:13 json_config -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:51.370 18:24:13 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:04:51.370 18:24:13 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:51.937 18:24:14 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:04:51.937 18:24:14 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:51.937 18:24:14 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:51.937 18:24:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:51.937 18:24:14 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:51.937 18:24:14 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:51.937 18:24:14 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:51.937 18:24:14 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:51.937 18:24:14 
json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:51.937 18:24:14 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:51.937 18:24:14 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:51.937 18:24:14 json_config -- json_config/json_config.sh@48 -- # local get_types 00:04:51.937 18:24:14 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:51.937 18:24:14 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:04:51.937 18:24:14 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:51.937 18:24:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:51.937 18:24:14 json_config -- json_config/json_config.sh@55 -- # return 0 00:04:51.937 18:24:14 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:04:51.937 18:24:14 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:51.937 18:24:14 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:51.937 18:24:14 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:04:51.937 18:24:14 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:04:51.937 18:24:14 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:04:51.937 18:24:14 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:51.937 18:24:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:51.937 18:24:14 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:51.937 18:24:14 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:04:51.937 18:24:14 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:04:51.937 18:24:14 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:51.937 18:24:14 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:52.196 MallocForNvmf0 00:04:52.196 18:24:14 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:52.196 18:24:14 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:52.455 MallocForNvmf1 00:04:52.455 18:24:14 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:52.455 18:24:14 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:52.714 [2024-07-15 18:24:15.133545] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:52.714 18:24:15 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:52.714 18:24:15 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:52.971 18:24:15 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:52.971 18:24:15 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:53.229 18:24:15 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:53.229 18:24:15 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:53.229 18:24:15 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:53.229 18:24:15 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:53.488 [2024-07-15 18:24:15.960684] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:53.488 18:24:15 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:04:53.488 18:24:15 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:53.488 18:24:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:53.488 18:24:16 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:04:53.488 18:24:16 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:53.488 18:24:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:53.488 18:24:16 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:04:53.488 18:24:16 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:53.488 18:24:16 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:53.747 MallocBdevForConfigChangeCheck 00:04:53.747 18:24:16 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:04:53.747 18:24:16 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:53.747 18:24:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:53.747 18:24:16 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:04:53.747 18:24:16 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:54.314 INFO: shutting down applications... 00:04:54.314 18:24:16 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 
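The tgt_rpc calls scattered through this stretch build the NVMe/TCP target that save_config snapshots just above. Collected into one plain shell sketch, every RPC and argument below is the one shown in the trace; only the rpc() wrapper and the final redirect are glue standing in for the tgt_rpc helper:

rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock "$@"; }
rpc bdev_malloc_create 8 512 --name MallocForNvmf0
rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
rpc nvmf_create_transport -t tcp -u 8192 -c 0
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
rpc save_config > /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json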
00:04:54.315 18:24:16 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:04:54.315 18:24:16 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:04:54.315 18:24:16 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:04:54.315 18:24:16 json_config -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:54.572 Calling clear_iscsi_subsystem 00:04:54.572 Calling clear_nvmf_subsystem 00:04:54.572 Calling clear_nbd_subsystem 00:04:54.572 Calling clear_ublk_subsystem 00:04:54.572 Calling clear_vhost_blk_subsystem 00:04:54.572 Calling clear_vhost_scsi_subsystem 00:04:54.572 Calling clear_bdev_subsystem 00:04:54.572 18:24:16 json_config -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:04:54.572 18:24:16 json_config -- json_config/json_config.sh@343 -- # count=100 00:04:54.572 18:24:16 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:04:54.572 18:24:16 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:54.572 18:24:16 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:54.572 18:24:16 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:04:54.830 18:24:17 json_config -- json_config/json_config.sh@345 -- # break 00:04:54.830 18:24:17 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:04:54.830 18:24:17 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:04:54.830 18:24:17 json_config -- json_config/common.sh@31 -- # local app=target 00:04:54.830 18:24:17 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:54.830 18:24:17 json_config -- json_config/common.sh@35 -- # [[ -n 61024 ]] 00:04:54.830 18:24:17 json_config -- json_config/common.sh@38 -- # kill -SIGINT 61024 00:04:54.830 18:24:17 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:54.830 18:24:17 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:54.830 18:24:17 json_config -- json_config/common.sh@41 -- # kill -0 61024 00:04:54.830 18:24:17 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:55.465 18:24:17 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:55.465 18:24:17 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:55.465 18:24:17 json_config -- json_config/common.sh@41 -- # kill -0 61024 00:04:55.465 18:24:17 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:55.465 18:24:17 json_config -- json_config/common.sh@43 -- # break 00:04:55.465 18:24:17 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:55.465 SPDK target shutdown done 00:04:55.465 18:24:17 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:55.465 INFO: relaunching applications... 00:04:55.465 18:24:17 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 
00:04:55.465 18:24:17 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:55.465 18:24:17 json_config -- json_config/common.sh@9 -- # local app=target 00:04:55.465 18:24:17 json_config -- json_config/common.sh@10 -- # shift 00:04:55.465 18:24:17 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:55.465 18:24:17 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:55.465 18:24:17 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:55.465 18:24:17 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:55.465 18:24:17 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:55.465 18:24:17 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=61288 00:04:55.465 Waiting for target to run... 00:04:55.465 18:24:17 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:55.465 18:24:17 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:55.465 18:24:17 json_config -- json_config/common.sh@25 -- # waitforlisten 61288 /var/tmp/spdk_tgt.sock 00:04:55.465 18:24:17 json_config -- common/autotest_common.sh@829 -- # '[' -z 61288 ']' 00:04:55.465 18:24:17 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:55.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:55.465 18:24:17 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:55.465 18:24:17 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:55.465 18:24:17 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:55.465 18:24:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:55.465 [2024-07-15 18:24:17.900145] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:04:55.465 [2024-07-15 18:24:17.900216] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61288 ] 00:04:55.724 [2024-07-15 18:24:18.262458] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.983 [2024-07-15 18:24:18.340859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.241 [2024-07-15 18:24:18.657039] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:56.242 [2024-07-15 18:24:18.689036] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:56.242 18:24:18 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:56.242 18:24:18 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:56.242 00:04:56.242 18:24:18 json_config -- json_config/common.sh@26 -- # echo '' 00:04:56.242 18:24:18 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:04:56.242 INFO: Checking if target configuration is the same... 00:04:56.242 18:24:18 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 
00:04:56.242 18:24:18 json_config -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:56.242 18:24:18 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:04:56.242 18:24:18 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:56.242 + '[' 2 -ne 2 ']' 00:04:56.242 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:56.242 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:04:56.242 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:56.242 +++ basename /dev/fd/62 00:04:56.242 ++ mktemp /tmp/62.XXX 00:04:56.242 + tmp_file_1=/tmp/62.tTM 00:04:56.242 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:56.242 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:56.242 + tmp_file_2=/tmp/spdk_tgt_config.json.DYn 00:04:56.242 + ret=0 00:04:56.242 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:56.501 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:56.760 + diff -u /tmp/62.tTM /tmp/spdk_tgt_config.json.DYn 00:04:56.760 INFO: JSON config files are the same 00:04:56.760 + echo 'INFO: JSON config files are the same' 00:04:56.760 + rm /tmp/62.tTM /tmp/spdk_tgt_config.json.DYn 00:04:56.760 + exit 0 00:04:56.760 18:24:19 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:04:56.760 18:24:19 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:56.760 INFO: changing configuration and checking if this can be detected... 00:04:56.760 18:24:19 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:56.760 18:24:19 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:56.760 18:24:19 json_config -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:56.760 18:24:19 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:04:56.760 18:24:19 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:56.760 + '[' 2 -ne 2 ']' 00:04:56.760 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:56.760 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:04:56.760 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:56.760 +++ basename /dev/fd/62 00:04:56.760 ++ mktemp /tmp/62.XXX 00:04:56.760 + tmp_file_1=/tmp/62.rst 00:04:57.019 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:57.019 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:57.019 + tmp_file_2=/tmp/spdk_tgt_config.json.Jeh 00:04:57.019 + ret=0 00:04:57.019 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:57.278 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:57.278 + diff -u /tmp/62.rst /tmp/spdk_tgt_config.json.Jeh 00:04:57.278 + ret=1 00:04:57.278 + echo '=== Start of file: /tmp/62.rst ===' 00:04:57.278 + cat /tmp/62.rst 00:04:57.278 + echo '=== End of file: /tmp/62.rst ===' 00:04:57.278 + echo '' 00:04:57.278 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Jeh ===' 00:04:57.278 + cat /tmp/spdk_tgt_config.json.Jeh 00:04:57.278 + echo '=== End of file: /tmp/spdk_tgt_config.json.Jeh ===' 00:04:57.278 + echo '' 00:04:57.278 + rm /tmp/62.rst /tmp/spdk_tgt_config.json.Jeh 00:04:57.278 + exit 1 00:04:57.278 INFO: configuration change detected. 00:04:57.278 18:24:19 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:04:57.278 18:24:19 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:04:57.278 18:24:19 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:04:57.279 18:24:19 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:57.279 18:24:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:57.279 18:24:19 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:04:57.279 18:24:19 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:04:57.279 18:24:19 json_config -- json_config/json_config.sh@317 -- # [[ -n 61288 ]] 00:04:57.279 18:24:19 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:04:57.279 18:24:19 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:04:57.279 18:24:19 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:57.279 18:24:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:57.279 18:24:19 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:04:57.279 18:24:19 json_config -- json_config/json_config.sh@193 -- # uname -s 00:04:57.279 18:24:19 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:04:57.279 18:24:19 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:04:57.279 18:24:19 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:04:57.279 18:24:19 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:04:57.279 18:24:19 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:57.279 18:24:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:57.279 18:24:19 json_config -- json_config/json_config.sh@323 -- # killprocess 61288 00:04:57.279 18:24:19 json_config -- common/autotest_common.sh@948 -- # '[' -z 61288 ']' 00:04:57.279 18:24:19 json_config -- common/autotest_common.sh@952 -- # kill -0 61288 00:04:57.279 18:24:19 json_config -- common/autotest_common.sh@953 -- # uname 00:04:57.279 18:24:19 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:57.279 18:24:19 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61288 00:04:57.279 
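Both json_diff.sh passes above (the 'JSON config files are the same' one and the 'configuration change detected.' one) reduce to the same comparison: normalize each JSON config with config_filter.py -method sort, then let diff -u decide. A sketch under the assumption that config_filter.py reads the config on stdin, which is what its bare invocation in the trace suggests; the two input file names are placeholders:

SPDK=/home/vagrant/spdk_repo/spdk
sorted_a=$(mktemp); sorted_b=$(mktemp)
"$SPDK/test/json_config/config_filter.py" -method sort < config_a.json > "$sorted_a"
"$SPDK/test/json_config/config_filter.py" -method sort < config_b.json > "$sorted_b"
if diff -u "$sorted_a" "$sorted_b"; then
    echo 'INFO: JSON config files are the same'
else
    echo 'INFO: configuration change detected.'
fi
rm -f "$sorted_a" "$sorted_b"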
18:24:19 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:57.279 18:24:19 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:57.279 killing process with pid 61288 00:04:57.279 18:24:19 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61288' 00:04:57.279 18:24:19 json_config -- common/autotest_common.sh@967 -- # kill 61288 00:04:57.279 18:24:19 json_config -- common/autotest_common.sh@972 -- # wait 61288 00:04:57.537 18:24:20 json_config -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:57.537 18:24:20 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:04:57.537 18:24:20 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:57.537 18:24:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:57.537 18:24:20 json_config -- json_config/json_config.sh@328 -- # return 0 00:04:57.537 INFO: Success 00:04:57.537 18:24:20 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:04:57.537 00:04:57.537 real 0m7.380s 00:04:57.537 user 0m9.895s 00:04:57.537 sys 0m2.005s 00:04:57.537 18:24:20 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:57.537 18:24:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:57.537 ************************************ 00:04:57.537 END TEST json_config 00:04:57.537 ************************************ 00:04:57.796 18:24:20 -- common/autotest_common.sh@1142 -- # return 0 00:04:57.796 18:24:20 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:57.796 18:24:20 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:57.796 18:24:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.796 18:24:20 -- common/autotest_common.sh@10 -- # set +x 00:04:57.796 ************************************ 00:04:57.796 START TEST json_config_extra_key 00:04:57.796 ************************************ 00:04:57.796 18:24:20 json_config_extra_key -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:57.796 18:24:20 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:57.796 18:24:20 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:57.796 18:24:20 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:57.796 18:24:20 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:57.796 18:24:20 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:57.796 18:24:20 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:57.796 18:24:20 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:57.796 18:24:20 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:57.796 18:24:20 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:57.796 18:24:20 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:57.796 18:24:20 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:57.796 18:24:20 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:57.796 18:24:20 json_config_extra_key -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:04:57.796 18:24:20 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=ee8aff67-4252-4979-91cf-1a72f40d57b6 00:04:57.796 18:24:20 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:57.796 18:24:20 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:57.796 18:24:20 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:57.796 18:24:20 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:57.796 18:24:20 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:57.796 18:24:20 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:57.796 18:24:20 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:57.796 18:24:20 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:57.796 18:24:20 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:57.796 18:24:20 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:57.796 18:24:20 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:57.796 18:24:20 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:57.796 18:24:20 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:57.796 18:24:20 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:04:57.796 18:24:20 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:57.796 18:24:20 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:57.796 18:24:20 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:57.796 18:24:20 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:57.796 18:24:20 json_config_extra_key -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:57.796 18:24:20 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:57.796 18:24:20 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:57.796 18:24:20 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:57.796 18:24:20 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:57.796 18:24:20 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:57.796 18:24:20 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:57.796 18:24:20 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:57.796 18:24:20 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:57.796 18:24:20 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:57.796 18:24:20 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:57.796 18:24:20 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:57.796 18:24:20 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:57.796 18:24:20 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:57.796 INFO: launching applications... 00:04:57.796 18:24:20 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:57.796 18:24:20 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:57.796 18:24:20 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:57.796 18:24:20 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:57.796 18:24:20 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:57.796 18:24:20 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:57.796 18:24:20 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:57.796 18:24:20 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:57.796 18:24:20 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:57.796 18:24:20 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=61458 00:04:57.796 Waiting for target to run... 00:04:57.796 18:24:20 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
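The json_config_extra_key test sets up its app_params, socket and config path above and then launches spdk_tgt with that pre-built JSON configuration on a private RPC socket, waiting for the socket to come up (the full traced command appears just below). Outside the harness the same pattern looks roughly like this; it is a minimal sketch, and the polling loop is a simplified stand-in for the waitforlisten helper, not the harness code itself:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r /var/tmp/spdk_tgt.sock \
        --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
    tgt_pid=$!

    # poll until the UNIX-domain RPC socket answers; rpc_get_methods is a cheap query
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done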
00:04:57.796 18:24:20 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 61458 /var/tmp/spdk_tgt.sock 00:04:57.796 18:24:20 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 61458 ']' 00:04:57.796 18:24:20 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:57.796 18:24:20 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:57.796 18:24:20 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:57.796 18:24:20 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:57.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:57.796 18:24:20 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:57.796 18:24:20 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:57.796 [2024-07-15 18:24:20.370921] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:04:57.796 [2024-07-15 18:24:20.371000] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61458 ] 00:04:58.402 [2024-07-15 18:24:20.725837] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.402 [2024-07-15 18:24:20.803904] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.669 18:24:21 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:58.669 18:24:21 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:04:58.669 00:04:58.669 18:24:21 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:58.669 INFO: shutting down applications... 00:04:58.669 18:24:21 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
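The shutdown path traced next (json_config_test_shutdown_app) sends SIGINT to the target and then polls it with kill -0 for up to 30 half-second intervals before reporting "SPDK target shutdown done". A sketch of that loop, not a verbatim copy of json_config/common.sh:

    kill -SIGINT "$tgt_pid"
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$tgt_pid" 2>/dev/null || break   # target has exited, stop polling
        sleep 0.5
    done
    echo 'SPDK target shutdown done'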
00:04:58.669 18:24:21 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:58.669 18:24:21 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:58.669 18:24:21 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:58.669 18:24:21 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 61458 ]] 00:04:58.669 18:24:21 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 61458 00:04:58.669 18:24:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:58.669 18:24:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:58.669 18:24:21 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 61458 00:04:58.669 18:24:21 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:59.236 18:24:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:59.236 18:24:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:59.236 18:24:21 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 61458 00:04:59.236 18:24:21 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:59.236 18:24:21 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:59.236 18:24:21 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:59.236 SPDK target shutdown done 00:04:59.236 18:24:21 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:59.236 Success 00:04:59.236 18:24:21 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:59.236 00:04:59.236 real 0m1.540s 00:04:59.236 user 0m1.286s 00:04:59.236 sys 0m0.395s 00:04:59.236 18:24:21 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:59.236 18:24:21 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:59.236 ************************************ 00:04:59.236 END TEST json_config_extra_key 00:04:59.236 ************************************ 00:04:59.236 18:24:21 -- common/autotest_common.sh@1142 -- # return 0 00:04:59.236 18:24:21 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:59.236 18:24:21 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:59.236 18:24:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:59.236 18:24:21 -- common/autotest_common.sh@10 -- # set +x 00:04:59.236 ************************************ 00:04:59.236 START TEST alias_rpc 00:04:59.236 ************************************ 00:04:59.236 18:24:21 alias_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:59.494 * Looking for test storage... 
00:04:59.495 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:59.495 18:24:21 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:59.495 18:24:21 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=61535 00:04:59.495 18:24:21 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:59.495 18:24:21 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 61535 00:04:59.495 18:24:21 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 61535 ']' 00:04:59.495 18:24:21 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.495 18:24:21 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:59.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:59.495 18:24:21 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:59.495 18:24:21 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:59.495 18:24:21 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.495 [2024-07-15 18:24:21.954378] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:04:59.495 [2024-07-15 18:24:21.954459] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61535 ] 00:04:59.495 [2024-07-15 18:24:22.094532] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.753 [2024-07-15 18:24:22.192112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.320 18:24:22 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:00.320 18:24:22 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:00.320 18:24:22 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:00.579 18:24:23 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 61535 00:05:00.579 18:24:23 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 61535 ']' 00:05:00.579 18:24:23 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 61535 00:05:00.579 18:24:23 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:05:00.579 18:24:23 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:00.579 18:24:23 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61535 00:05:00.579 18:24:23 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:00.579 18:24:23 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:00.579 18:24:23 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61535' 00:05:00.579 killing process with pid 61535 00:05:00.579 18:24:23 alias_rpc -- common/autotest_common.sh@967 -- # kill 61535 00:05:00.579 18:24:23 alias_rpc -- common/autotest_common.sh@972 -- # wait 61535 00:05:00.836 00:05:00.836 real 0m1.651s 00:05:00.837 user 0m1.808s 00:05:00.837 sys 0m0.430s 00:05:00.837 18:24:23 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:00.837 18:24:23 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.837 ************************************ 00:05:00.837 END TEST alias_rpc 00:05:00.837 ************************************ 00:05:01.094 
18:24:23 -- common/autotest_common.sh@1142 -- # return 0 00:05:01.094 18:24:23 -- spdk/autotest.sh@176 -- # [[ 1 -eq 0 ]] 00:05:01.094 18:24:23 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:01.094 18:24:23 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:01.094 18:24:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:01.094 18:24:23 -- common/autotest_common.sh@10 -- # set +x 00:05:01.094 ************************************ 00:05:01.094 START TEST dpdk_mem_utility 00:05:01.094 ************************************ 00:05:01.094 18:24:23 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:01.094 * Looking for test storage... 00:05:01.094 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:01.094 18:24:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:01.094 18:24:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=61621 00:05:01.094 18:24:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:01.094 18:24:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 61621 00:05:01.094 18:24:23 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 61621 ']' 00:05:01.094 18:24:23 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.094 18:24:23 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:01.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:01.094 18:24:23 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:01.094 18:24:23 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:01.094 18:24:23 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:01.094 [2024-07-15 18:24:23.671054] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 
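The dpdk_mem_utility test that starts here first calls the env_dpdk_get_mem_stats RPC, which per the trace below writes /tmp/spdk_mem_dump.txt, and then runs scripts/dpdk_mem_info.py to summarize that dump, once without arguments and once with -m 0 for heap 0 detail. The equivalent manual invocation against a running target is roughly the following sketch, assuming the default /var/tmp/spdk.sock RPC socket:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats   # dump goes to /tmp/spdk_mem_dump.txt
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py                # heap / mempool / memzone summary
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0           # element-level detail for heap id 0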
00:05:01.094 [2024-07-15 18:24:23.671139] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61621 ] 00:05:01.352 [2024-07-15 18:24:23.812432] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.352 [2024-07-15 18:24:23.897962] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.919 18:24:24 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:01.919 18:24:24 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:05:01.919 18:24:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:01.919 18:24:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:01.919 18:24:24 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:01.919 18:24:24 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:02.178 { 00:05:02.178 "filename": "/tmp/spdk_mem_dump.txt" 00:05:02.178 } 00:05:02.178 18:24:24 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:02.178 18:24:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:02.178 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:02.178 1 heaps totaling size 814.000000 MiB 00:05:02.179 size: 814.000000 MiB heap id: 0 00:05:02.179 end heaps---------- 00:05:02.179 8 mempools totaling size 598.116089 MiB 00:05:02.179 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:02.179 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:02.179 size: 84.521057 MiB name: bdev_io_61621 00:05:02.179 size: 51.011292 MiB name: evtpool_61621 00:05:02.179 size: 50.003479 MiB name: msgpool_61621 00:05:02.179 size: 21.763794 MiB name: PDU_Pool 00:05:02.179 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:02.179 size: 0.026123 MiB name: Session_Pool 00:05:02.179 end mempools------- 00:05:02.179 6 memzones totaling size 4.142822 MiB 00:05:02.179 size: 1.000366 MiB name: RG_ring_0_61621 00:05:02.179 size: 1.000366 MiB name: RG_ring_1_61621 00:05:02.179 size: 1.000366 MiB name: RG_ring_4_61621 00:05:02.179 size: 1.000366 MiB name: RG_ring_5_61621 00:05:02.179 size: 0.125366 MiB name: RG_ring_2_61621 00:05:02.179 size: 0.015991 MiB name: RG_ring_3_61621 00:05:02.179 end memzones------- 00:05:02.179 18:24:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:02.179 heap id: 0 total size: 814.000000 MiB number of busy elements: 241 number of free elements: 15 00:05:02.179 list of free elements. 
size: 12.482727 MiB 00:05:02.179 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:02.179 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:02.179 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:02.179 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:02.179 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:02.179 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:02.179 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:02.179 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:02.179 element at address: 0x200000200000 with size: 0.836853 MiB 00:05:02.179 element at address: 0x20001aa00000 with size: 0.570251 MiB 00:05:02.179 element at address: 0x20000b200000 with size: 0.489258 MiB 00:05:02.179 element at address: 0x200000800000 with size: 0.486877 MiB 00:05:02.179 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:02.179 element at address: 0x200027e00000 with size: 0.397949 MiB 00:05:02.179 element at address: 0x200003a00000 with size: 0.350769 MiB 00:05:02.179 list of standard malloc elements. size: 199.254700 MiB 00:05:02.179 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:02.179 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:02.179 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:02.179 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:02.179 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:02.179 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:02.179 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:02.179 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:02.179 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:02.179 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:05:02.179 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:05:02.179 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:05:02.179 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:05:02.179 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:05:02.179 element at address: 0x2000002d6780 with size: 0.000183 MiB 00:05:02.179 element at address: 0x2000002d6840 with size: 0.000183 MiB 00:05:02.179 element at address: 0x2000002d6900 with size: 0.000183 MiB 00:05:02.179 element at address: 0x2000002d69c0 with size: 0.000183 MiB 00:05:02.179 element at address: 0x2000002d6a80 with size: 0.000183 MiB 00:05:02.179 element at address: 0x2000002d6b40 with size: 0.000183 MiB 00:05:02.179 element at address: 0x2000002d6c00 with size: 0.000183 MiB 00:05:02.179 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 00:05:02.179 element at address: 0x2000002d6d80 with size: 0.000183 MiB 00:05:02.179 element at address: 0x2000002d6e40 with size: 0.000183 MiB 00:05:02.179 element at address: 0x2000002d6f00 with size: 0.000183 MiB 00:05:02.179 element at address: 0x2000002d6fc0 with size: 0.000183 MiB 00:05:02.179 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:05:02.179 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:05:02.179 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:05:02.179 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:05:02.179 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:05:02.179 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:05:02.179 element at address: 0x2000002d7640 with size: 0.000183 MiB 
00:05:02.179 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:05:02.179 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:05:02.179 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:05:02.179 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:05:02.179 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:05:02.179 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:02.179 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:02.179 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:02.179 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:02.179 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:05:02.179 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:05:02.179 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:05:02.179 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:05:02.179 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:05:02.179 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:02.179 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:02.179 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:02.179 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:05:02.179 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:05:02.179 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:05:02.179 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:05:02.179 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:05:02.179 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:05:02.179 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:05:02.179 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:05:02.179 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:05:02.179 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:05:02.179 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:05:02.179 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:05:02.179 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:05:02.179 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:05:02.179 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:05:02.179 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:05:02.179 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:05:02.179 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:05:02.179 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:05:02.179 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:05:02.179 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:05:02.179 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:05:02.179 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:05:02.179 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:05:02.179 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:05:02.179 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:05:02.179 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:02.179 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:02.179 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:02.179 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:02.179 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:02.179 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:02.179 element at 
address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:02.179 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:02.179 element at address: 0x20000b27d400 with size: 0.000183 MiB 00:05:02.179 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:05:02.179 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:05:02.179 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:05:02.179 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:05:02.179 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:05:02.179 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:05:02.179 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:05:02.179 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:02.179 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:02.179 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:02.179 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:02.179 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:02.179 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:02.179 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:02.179 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:05:02.179 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:05:02.179 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:05:02.179 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:05:02.179 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:05:02.179 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:05:02.179 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:05:02.179 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:05:02.179 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:05:02.179 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:05:02.179 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:05:02.179 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:05:02.179 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:05:02.179 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:05:02.179 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:05:02.179 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:05:02.179 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:05:02.179 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:05:02.179 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:05:02.179 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:05:02.179 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:05:02.179 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:05:02.179 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:05:02.179 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:05:02.179 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:05:02.179 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:05:02.179 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:05:02.179 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:05:02.179 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:05:02.179 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:05:02.179 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:05:02.180 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:05:02.180 element at address: 0x20001aa937c0 
with size: 0.000183 MiB 00:05:02.180 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:05:02.180 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:05:02.180 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:05:02.180 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:05:02.180 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:05:02.180 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:05:02.180 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:05:02.180 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:05:02.180 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:05:02.180 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:05:02.180 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:05:02.180 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:05:02.180 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:05:02.180 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:05:02.180 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:05:02.180 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:05:02.180 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:05:02.180 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:05:02.180 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:05:02.180 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:05:02.180 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:05:02.180 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:05:02.180 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:05:02.180 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:05:02.180 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:05:02.180 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:05:02.180 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:05:02.180 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:05:02.180 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:05:02.180 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:05:02.180 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:05:02.180 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:05:02.180 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:05:02.180 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:05:02.180 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:05:02.180 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:05:02.180 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:02.180 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:02.180 element at address: 0x200027e65e00 with size: 0.000183 MiB 00:05:02.180 element at address: 0x200027e65ec0 with size: 0.000183 MiB 00:05:02.180 element at address: 0x200027e6cac0 with size: 0.000183 MiB 00:05:02.180 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:05:02.180 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:05:02.180 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:05:02.180 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:05:02.180 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:05:02.180 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:05:02.180 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:05:02.180 element at address: 0x200027e6d200 with size: 0.000183 MiB 
00:05:02.180 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:05:02.180 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:05:02.180 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:05:02.180 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:05:02.180 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:05:02.180 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:05:02.180 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:05:02.180 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:05:02.180 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:05:02.180 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:05:02.180 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:05:02.180 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:05:02.180 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:05:02.180 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:05:02.180 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:05:02.180 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:05:02.180 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:05:02.180 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:05:02.180 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:05:02.180 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:05:02.180 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:05:02.180 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:05:02.180 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:05:02.180 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:05:02.180 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:05:02.180 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:05:02.180 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:05:02.180 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:05:02.180 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:05:02.180 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:05:02.180 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:05:02.180 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:05:02.180 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:05:02.180 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:05:02.180 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:05:02.180 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:05:02.180 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:05:02.180 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:05:02.180 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:05:02.180 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:05:02.180 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:05:02.180 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:05:02.180 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:05:02.180 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:05:02.180 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:05:02.180 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:05:02.180 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:05:02.180 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:05:02.180 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:05:02.180 element at 
address: 0x200027e6f780 with size: 0.000183 MiB 00:05:02.180 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:05:02.180 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:05:02.180 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:05:02.180 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:05:02.180 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:05:02.180 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:05:02.180 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:05:02.180 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:05:02.180 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:02.180 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:02.180 list of memzone associated elements. size: 602.262573 MiB 00:05:02.180 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:02.180 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:02.180 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:02.180 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:02.180 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:02.180 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_61621_0 00:05:02.180 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:02.180 associated memzone info: size: 48.002930 MiB name: MP_evtpool_61621_0 00:05:02.180 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:02.180 associated memzone info: size: 48.002930 MiB name: MP_msgpool_61621_0 00:05:02.180 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:02.180 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:02.180 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:02.180 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:02.180 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:02.180 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_61621 00:05:02.180 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:02.180 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_61621 00:05:02.180 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:02.180 associated memzone info: size: 1.007996 MiB name: MP_evtpool_61621 00:05:02.180 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:02.180 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:02.180 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:02.180 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:02.180 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:02.180 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:02.180 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:02.180 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:02.180 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:02.180 associated memzone info: size: 1.000366 MiB name: RG_ring_0_61621 00:05:02.180 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:02.180 associated memzone info: size: 1.000366 MiB name: RG_ring_1_61621 00:05:02.180 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:02.180 associated memzone info: size: 1.000366 MiB name: RG_ring_4_61621 00:05:02.180 element at address: 0x200031cfe940 with size: 1.000488 MiB 
00:05:02.180 associated memzone info: size: 1.000366 MiB name: RG_ring_5_61621 00:05:02.180 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:02.180 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_61621 00:05:02.180 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:02.180 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:02.180 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:02.180 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:02.180 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:02.180 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:02.180 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:02.180 associated memzone info: size: 0.125366 MiB name: RG_ring_2_61621 00:05:02.180 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:02.180 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:02.181 element at address: 0x200027e65f80 with size: 0.023743 MiB 00:05:02.181 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:02.181 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:02.181 associated memzone info: size: 0.015991 MiB name: RG_ring_3_61621 00:05:02.181 element at address: 0x200027e6c0c0 with size: 0.002441 MiB 00:05:02.181 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:02.181 element at address: 0x2000002d7080 with size: 0.000305 MiB 00:05:02.181 associated memzone info: size: 0.000183 MiB name: MP_msgpool_61621 00:05:02.181 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:02.181 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_61621 00:05:02.181 element at address: 0x200027e6cb80 with size: 0.000305 MiB 00:05:02.181 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:02.181 18:24:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:02.181 18:24:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 61621 00:05:02.181 18:24:24 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 61621 ']' 00:05:02.181 18:24:24 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 61621 00:05:02.181 18:24:24 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:05:02.181 18:24:24 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:02.181 18:24:24 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61621 00:05:02.181 18:24:24 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:02.181 18:24:24 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:02.181 killing process with pid 61621 00:05:02.181 18:24:24 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61621' 00:05:02.181 18:24:24 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 61621 00:05:02.181 18:24:24 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 61621 00:05:02.439 00:05:02.439 real 0m1.512s 00:05:02.439 user 0m1.567s 00:05:02.439 sys 0m0.395s 00:05:02.439 18:24:25 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:02.439 18:24:25 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:02.439 ************************************ 00:05:02.439 END TEST dpdk_mem_utility 00:05:02.439 
************************************ 00:05:02.698 18:24:25 -- common/autotest_common.sh@1142 -- # return 0 00:05:02.698 18:24:25 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:02.698 18:24:25 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:02.698 18:24:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:02.698 18:24:25 -- common/autotest_common.sh@10 -- # set +x 00:05:02.698 ************************************ 00:05:02.698 START TEST event 00:05:02.698 ************************************ 00:05:02.698 18:24:25 event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:02.698 * Looking for test storage... 00:05:02.698 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:02.698 18:24:25 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:02.698 18:24:25 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:02.698 18:24:25 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:02.698 18:24:25 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:02.698 18:24:25 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:02.698 18:24:25 event -- common/autotest_common.sh@10 -- # set +x 00:05:02.698 ************************************ 00:05:02.698 START TEST event_perf 00:05:02.698 ************************************ 00:05:02.698 18:24:25 event.event_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:02.698 Running I/O for 1 seconds...[2024-07-15 18:24:25.245460] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:05:02.698 [2024-07-15 18:24:25.245827] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61711 ] 00:05:02.956 [2024-07-15 18:24:25.388373] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:02.956 [2024-07-15 18:24:25.474351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:02.956 [2024-07-15 18:24:25.474537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:02.956 [2024-07-15 18:24:25.475113] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:02.956 [2024-07-15 18:24:25.475123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.332 Running I/O for 1 seconds... 00:05:04.332 lcore 0: 193065 00:05:04.332 lcore 1: 193065 00:05:04.332 lcore 2: 193066 00:05:04.332 lcore 3: 193066 00:05:04.332 done. 
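In the event_perf run above, -m 0xF hands the app lcores 0 through 3, which is why four reactors start and four per-lcore event counters are reported, while -t 1 bounds the run to one second. A quick, purely illustrative way to decode such a core mask (not part of the test):

    mask=0xF
    for (( i = 0; i < 32; i++ )); do
        (( (mask >> i) & 1 )) && echo "lcore $i selected"   # prints lcores 0, 1, 2 and 3 for 0xF
    done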
00:05:04.332 00:05:04.332 real 0m1.331s 00:05:04.332 user 0m4.135s 00:05:04.332 sys 0m0.070s 00:05:04.332 18:24:26 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:04.332 18:24:26 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:04.332 ************************************ 00:05:04.332 END TEST event_perf 00:05:04.332 ************************************ 00:05:04.332 18:24:26 event -- common/autotest_common.sh@1142 -- # return 0 00:05:04.332 18:24:26 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:04.332 18:24:26 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:04.332 18:24:26 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:04.332 18:24:26 event -- common/autotest_common.sh@10 -- # set +x 00:05:04.332 ************************************ 00:05:04.332 START TEST event_reactor 00:05:04.332 ************************************ 00:05:04.332 18:24:26 event.event_reactor -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:04.332 [2024-07-15 18:24:26.653718] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:05:04.332 [2024-07-15 18:24:26.654060] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61749 ] 00:05:04.332 [2024-07-15 18:24:26.798783] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.332 [2024-07-15 18:24:26.893921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.781 test_start 00:05:05.781 oneshot 00:05:05.781 tick 100 00:05:05.781 tick 100 00:05:05.781 tick 250 00:05:05.781 tick 100 00:05:05.781 tick 100 00:05:05.781 tick 100 00:05:05.781 tick 250 00:05:05.781 tick 500 00:05:05.781 tick 100 00:05:05.781 tick 100 00:05:05.781 tick 250 00:05:05.781 tick 100 00:05:05.781 tick 100 00:05:05.781 test_end 00:05:05.781 00:05:05.781 real 0m1.339s 00:05:05.781 user 0m1.177s 00:05:05.781 sys 0m0.056s 00:05:05.781 18:24:27 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:05.781 18:24:27 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:05.781 ************************************ 00:05:05.781 END TEST event_reactor 00:05:05.781 ************************************ 00:05:05.781 18:24:28 event -- common/autotest_common.sh@1142 -- # return 0 00:05:05.781 18:24:28 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:05.781 18:24:28 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:05.781 18:24:28 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:05.781 18:24:28 event -- common/autotest_common.sh@10 -- # set +x 00:05:05.781 ************************************ 00:05:05.781 START TEST event_reactor_perf 00:05:05.781 ************************************ 00:05:05.781 18:24:28 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:05.781 [2024-07-15 18:24:28.059835] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 
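The event_reactor run above exercises one-shot and periodic (tick) events on a single core, while the reactor_perf run that begins here reports a raw throughput line of the form "Performance: N events per second". One illustrative way to capture just that number from a manual run:

    /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 \
        | awk '/Performance:/ {print $2}'    # e.g. 480352 on this host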
00:05:05.781 [2024-07-15 18:24:28.060135] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61785 ] 00:05:05.781 [2024-07-15 18:24:28.220588] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.781 [2024-07-15 18:24:28.314675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.150 test_start 00:05:07.150 test_end 00:05:07.150 Performance: 480352 events per second 00:05:07.150 00:05:07.150 real 0m1.352s 00:05:07.150 user 0m1.171s 00:05:07.150 sys 0m0.074s 00:05:07.150 18:24:29 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:07.150 18:24:29 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:07.150 ************************************ 00:05:07.150 END TEST event_reactor_perf 00:05:07.150 ************************************ 00:05:07.150 18:24:29 event -- common/autotest_common.sh@1142 -- # return 0 00:05:07.150 18:24:29 event -- event/event.sh@49 -- # uname -s 00:05:07.150 18:24:29 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:07.150 18:24:29 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:07.150 18:24:29 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:07.150 18:24:29 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.150 18:24:29 event -- common/autotest_common.sh@10 -- # set +x 00:05:07.150 ************************************ 00:05:07.150 START TEST event_scheduler 00:05:07.150 ************************************ 00:05:07.150 18:24:29 event.event_scheduler -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:07.150 * Looking for test storage... 00:05:07.150 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:07.150 18:24:29 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:07.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:07.150 18:24:29 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=61846 00:05:07.150 18:24:29 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:07.150 18:24:29 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 61846 00:05:07.150 18:24:29 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:07.150 18:24:29 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 61846 ']' 00:05:07.150 18:24:29 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:07.150 18:24:29 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:07.150 18:24:29 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:07.150 18:24:29 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:07.150 18:24:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:07.150 [2024-07-15 18:24:29.653836] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 
00:05:07.150 [2024-07-15 18:24:29.654115] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61846 ] 00:05:07.407 [2024-07-15 18:24:29.786138] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:07.407 [2024-07-15 18:24:29.882145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.407 [2024-07-15 18:24:29.882329] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:07.407 [2024-07-15 18:24:29.882526] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:07.407 [2024-07-15 18:24:29.882528] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:07.970 18:24:30 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:07.970 18:24:30 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:05:07.970 18:24:30 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:07.970 18:24:30 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.970 18:24:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:07.970 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:07.970 POWER: Cannot set governor of lcore 0 to userspace 00:05:07.970 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:07.970 POWER: Cannot set governor of lcore 0 to performance 00:05:07.970 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:07.970 POWER: Cannot set governor of lcore 0 to userspace 00:05:07.970 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:07.970 POWER: Cannot set governor of lcore 0 to userspace 00:05:07.970 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:07.970 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:07.970 POWER: Unable to set Power Management Environment for lcore 0 00:05:07.970 [2024-07-15 18:24:30.544589] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:07.970 [2024-07-15 18:24:30.544603] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:07.970 [2024-07-15 18:24:30.544613] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:07.970 [2024-07-15 18:24:30.544639] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:07.970 [2024-07-15 18:24:30.544649] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:07.970 [2024-07-15 18:24:30.544657] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:07.970 18:24:30 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:07.970 18:24:30 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:07.970 18:24:30 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.970 18:24:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:08.228 [2024-07-15 18:24:30.623914] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
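Because the scheduler test app is started with --wait-for-rpc, the scheduler is selected over RPC before initialization completes: framework_set_scheduler dynamic is issued first, and since the DPDK governor cannot open the cpufreq sysfs nodes on this VM, the dynamic scheduler falls back to its built-in limits (load 20, core 80, busy 95) before framework_start_init finishes bring-up. The same sequence driven by hand looks roughly like this sketch; framework_get_scheduler is an optional confirmation query, assumed to be available in this SPDK build:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc framework_set_scheduler dynamic   # issued while the app is still paused by --wait-for-rpc, as the test does
    $rpc framework_start_init              # complete subsystem initialization
    $rpc framework_get_scheduler           # report the active scheduler and its parameters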
00:05:08.228 18:24:30 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:08.228 18:24:30 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:08.228 18:24:30 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:08.228 18:24:30 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:08.228 18:24:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:08.228 ************************************ 00:05:08.228 START TEST scheduler_create_thread 00:05:08.228 ************************************ 00:05:08.228 18:24:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:05:08.228 18:24:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:08.228 18:24:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:08.228 18:24:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:08.228 2 00:05:08.228 18:24:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:08.228 18:24:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:08.228 18:24:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:08.228 18:24:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:08.228 3 00:05:08.228 18:24:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:08.228 18:24:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:08.228 18:24:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:08.228 18:24:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:08.228 4 00:05:08.228 18:24:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:08.228 18:24:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:08.228 18:24:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:08.228 18:24:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:08.228 5 00:05:08.229 18:24:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:08.229 18:24:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:08.229 18:24:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:08.229 18:24:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:08.229 6 00:05:08.229 18:24:30 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:08.229 18:24:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:08.229 18:24:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:08.229 18:24:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:08.229 7 00:05:08.229 18:24:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:08.229 18:24:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:08.229 18:24:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:08.229 18:24:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:08.229 8 00:05:08.229 18:24:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:08.229 18:24:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:08.229 18:24:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:08.229 18:24:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:08.229 9 00:05:08.229 18:24:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:08.229 18:24:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:08.229 18:24:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:08.229 18:24:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:08.229 10 00:05:08.229 18:24:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:08.229 18:24:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:08.229 18:24:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:08.229 18:24:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:08.229 18:24:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:08.229 18:24:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:08.229 18:24:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:08.229 18:24:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:08.229 18:24:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:08.229 18:24:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:08.229 18:24:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:08.229 18:24:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:08.229 18:24:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.604 18:24:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:09.604 18:24:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:09.604 18:24:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:09.604 18:24:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:09.604 18:24:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:10.976 18:24:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.976 00:05:10.976 real 0m2.608s 00:05:10.976 user 0m0.023s 00:05:10.976 sys 0m0.012s 00:05:10.976 ************************************ 00:05:10.976 END TEST scheduler_create_thread 00:05:10.976 18:24:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:10.976 18:24:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:10.976 ************************************ 00:05:10.976 18:24:33 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:05:10.976 18:24:33 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:10.976 18:24:33 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 61846 00:05:10.976 18:24:33 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 61846 ']' 00:05:10.976 18:24:33 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 61846 00:05:10.976 18:24:33 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:05:10.976 18:24:33 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:10.976 18:24:33 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61846 00:05:10.976 18:24:33 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:10.976 18:24:33 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:10.976 18:24:33 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61846' 00:05:10.976 killing process with pid 61846 00:05:10.976 18:24:33 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 61846 00:05:10.976 18:24:33 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 61846 00:05:11.233 [2024-07-15 18:24:33.726324] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
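The scheduler_create_thread test above drives the scheduler app purely through plugin RPCs. A condensed replay of that sequence, assembled from the commands visible in the trace (rpc.py path assumed to match the one used elsewhere in this run; thread ids are read from the RPC output exactly as the script's thread_id=$(...) captures do), looks like:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Pinned threads that stay 100% active, one per core mask 0x1..0x8 in the trace.
  $RPC --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  # An idle thread that is later raised to 50% active via its returned id.
  thread_id=$($RPC --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
  $RPC --plugin scheduler_plugin scheduler_thread_set_active "$thread_id" 50
  # A throwaway thread created only to exercise the delete path.
  thread_id=$($RPC --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
  $RPC --plugin scheduler_plugin scheduler_thread_delete "$thread_id"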
00:05:11.491 00:05:11.491 real 0m4.467s 00:05:11.491 user 0m8.189s 00:05:11.491 sys 0m0.398s 00:05:11.491 18:24:33 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:11.491 ************************************ 00:05:11.491 END TEST event_scheduler 00:05:11.491 18:24:33 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:11.491 ************************************ 00:05:11.491 18:24:33 event -- common/autotest_common.sh@1142 -- # return 0 00:05:11.491 18:24:33 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:11.491 18:24:33 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:11.491 18:24:33 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:11.491 18:24:33 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:11.491 18:24:33 event -- common/autotest_common.sh@10 -- # set +x 00:05:11.491 ************************************ 00:05:11.491 START TEST app_repeat 00:05:11.491 ************************************ 00:05:11.491 18:24:34 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:05:11.491 18:24:34 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.491 18:24:34 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.491 18:24:34 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:11.491 18:24:34 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:11.491 18:24:34 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:11.491 18:24:34 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:11.491 18:24:34 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:11.491 18:24:34 event.app_repeat -- event/event.sh@19 -- # repeat_pid=61964 00:05:11.491 Process app_repeat pid: 61964 00:05:11.491 18:24:34 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:11.491 18:24:34 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:11.491 18:24:34 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 61964' 00:05:11.491 18:24:34 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:11.491 spdk_app_start Round 0 00:05:11.491 18:24:34 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:11.491 18:24:34 event.app_repeat -- event/event.sh@25 -- # waitforlisten 61964 /var/tmp/spdk-nbd.sock 00:05:11.491 18:24:34 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 61964 ']' 00:05:11.491 18:24:34 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:11.491 18:24:34 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:11.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:11.491 18:24:34 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:11.491 18:24:34 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:11.491 18:24:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:11.491 [2024-07-15 18:24:34.049730] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 
00:05:11.491 [2024-07-15 18:24:34.049815] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61964 ] 00:05:11.748 [2024-07-15 18:24:34.190501] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:11.748 [2024-07-15 18:24:34.291780] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:11.748 [2024-07-15 18:24:34.291781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.315 18:24:34 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:12.315 18:24:34 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:12.315 18:24:34 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:12.574 Malloc0 00:05:12.574 18:24:35 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:12.832 Malloc1 00:05:12.832 18:24:35 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:12.832 18:24:35 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.832 18:24:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:12.832 18:24:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:12.832 18:24:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.832 18:24:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:12.832 18:24:35 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:12.832 18:24:35 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.832 18:24:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:12.832 18:24:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:12.832 18:24:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.832 18:24:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:12.832 18:24:35 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:12.832 18:24:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:12.832 18:24:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:12.832 18:24:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:13.091 /dev/nbd0 00:05:13.091 18:24:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:13.091 18:24:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:13.091 18:24:35 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:13.091 18:24:35 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:13.091 18:24:35 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:13.091 18:24:35 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:13.091 18:24:35 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:13.091 18:24:35 event.app_repeat -- 
common/autotest_common.sh@871 -- # break 00:05:13.091 18:24:35 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:13.091 18:24:35 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:13.091 18:24:35 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:13.091 1+0 records in 00:05:13.091 1+0 records out 00:05:13.091 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000339864 s, 12.1 MB/s 00:05:13.091 18:24:35 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:13.091 18:24:35 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:13.091 18:24:35 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:13.091 18:24:35 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:13.091 18:24:35 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:13.091 18:24:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:13.091 18:24:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:13.091 18:24:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:13.349 /dev/nbd1 00:05:13.350 18:24:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:13.350 18:24:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:13.350 18:24:35 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:13.350 18:24:35 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:13.350 18:24:35 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:13.350 18:24:35 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:13.350 18:24:35 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:13.350 18:24:35 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:13.350 18:24:35 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:13.350 18:24:35 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:13.350 18:24:35 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:13.350 1+0 records in 00:05:13.350 1+0 records out 00:05:13.350 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00025204 s, 16.3 MB/s 00:05:13.350 18:24:35 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:13.350 18:24:35 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:13.350 18:24:35 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:13.350 18:24:35 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:13.350 18:24:35 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:13.350 18:24:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:13.350 18:24:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:13.350 18:24:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:13.350 18:24:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.350 
18:24:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:13.608 18:24:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:13.608 { 00:05:13.608 "bdev_name": "Malloc0", 00:05:13.608 "nbd_device": "/dev/nbd0" 00:05:13.608 }, 00:05:13.608 { 00:05:13.608 "bdev_name": "Malloc1", 00:05:13.608 "nbd_device": "/dev/nbd1" 00:05:13.608 } 00:05:13.608 ]' 00:05:13.608 18:24:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:13.608 { 00:05:13.608 "bdev_name": "Malloc0", 00:05:13.608 "nbd_device": "/dev/nbd0" 00:05:13.608 }, 00:05:13.608 { 00:05:13.608 "bdev_name": "Malloc1", 00:05:13.608 "nbd_device": "/dev/nbd1" 00:05:13.608 } 00:05:13.608 ]' 00:05:13.608 18:24:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:13.608 18:24:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:13.608 /dev/nbd1' 00:05:13.608 18:24:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:13.608 /dev/nbd1' 00:05:13.608 18:24:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:13.608 18:24:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:13.608 18:24:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:13.608 18:24:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:13.608 18:24:36 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:13.608 18:24:36 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:13.608 18:24:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.608 18:24:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:13.608 18:24:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:13.608 18:24:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:13.608 18:24:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:13.608 18:24:36 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:13.608 256+0 records in 00:05:13.608 256+0 records out 00:05:13.608 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0129076 s, 81.2 MB/s 00:05:13.608 18:24:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:13.608 18:24:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:13.608 256+0 records in 00:05:13.608 256+0 records out 00:05:13.608 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0259418 s, 40.4 MB/s 00:05:13.608 18:24:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:13.608 18:24:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:13.608 256+0 records in 00:05:13.608 256+0 records out 00:05:13.608 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0278625 s, 37.6 MB/s 00:05:13.608 18:24:36 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:13.608 18:24:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.608 18:24:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:13.608 18:24:36 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:13.608 18:24:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:13.608 18:24:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:13.608 18:24:36 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:13.608 18:24:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:13.608 18:24:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:13.608 18:24:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:13.608 18:24:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:13.608 18:24:36 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:13.608 18:24:36 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:13.608 18:24:36 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.608 18:24:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.608 18:24:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:13.608 18:24:36 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:13.608 18:24:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:13.608 18:24:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:13.867 18:24:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:13.867 18:24:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:13.867 18:24:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:13.867 18:24:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:13.867 18:24:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:13.867 18:24:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:13.867 18:24:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:13.867 18:24:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:13.867 18:24:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:13.867 18:24:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:14.125 18:24:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:14.125 18:24:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:14.125 18:24:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:14.125 18:24:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:14.125 18:24:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:14.125 18:24:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:14.125 18:24:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:14.125 18:24:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:14.125 18:24:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:14.125 18:24:36 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.125 18:24:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:14.383 18:24:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:14.383 18:24:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:14.383 18:24:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:14.383 18:24:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:14.383 18:24:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:14.383 18:24:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:14.383 18:24:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:14.383 18:24:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:14.383 18:24:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:14.383 18:24:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:14.383 18:24:36 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:14.383 18:24:36 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:14.383 18:24:36 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:14.642 18:24:37 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:14.900 [2024-07-15 18:24:37.308955] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:14.900 [2024-07-15 18:24:37.405026] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:14.900 [2024-07-15 18:24:37.405027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.900 [2024-07-15 18:24:37.446492] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:14.900 [2024-07-15 18:24:37.446538] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:18.207 18:24:40 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:18.207 spdk_app_start Round 1 00:05:18.207 18:24:40 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:18.207 18:24:40 event.app_repeat -- event/event.sh@25 -- # waitforlisten 61964 /var/tmp/spdk-nbd.sock 00:05:18.207 18:24:40 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 61964 ']' 00:05:18.207 18:24:40 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:18.207 18:24:40 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:18.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:18.207 18:24:40 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:18.207 18:24:40 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:18.207 18:24:40 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:18.207 18:24:40 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:18.207 18:24:40 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:18.207 18:24:40 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:18.207 Malloc0 00:05:18.207 18:24:40 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:18.207 Malloc1 00:05:18.465 18:24:40 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:18.465 18:24:40 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.465 18:24:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:18.465 18:24:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:18.465 18:24:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.465 18:24:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:18.465 18:24:40 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:18.465 18:24:40 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.465 18:24:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:18.465 18:24:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:18.465 18:24:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.465 18:24:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:18.465 18:24:40 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:18.465 18:24:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:18.465 18:24:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:18.465 18:24:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:18.465 /dev/nbd0 00:05:18.725 18:24:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:18.725 18:24:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:18.725 18:24:41 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:18.725 18:24:41 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:18.725 18:24:41 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:18.725 18:24:41 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:18.725 18:24:41 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:18.725 18:24:41 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:18.725 18:24:41 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:18.725 18:24:41 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:18.725 18:24:41 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:18.725 1+0 records in 00:05:18.725 1+0 records out 
00:05:18.725 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000319745 s, 12.8 MB/s 00:05:18.725 18:24:41 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:18.725 18:24:41 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:18.725 18:24:41 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:18.725 18:24:41 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:18.725 18:24:41 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:18.725 18:24:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:18.725 18:24:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:18.725 18:24:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:18.725 /dev/nbd1 00:05:18.985 18:24:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:18.985 18:24:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:18.985 18:24:41 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:18.985 18:24:41 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:18.985 18:24:41 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:18.985 18:24:41 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:18.985 18:24:41 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:18.985 18:24:41 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:18.985 18:24:41 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:18.985 18:24:41 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:18.985 18:24:41 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:18.985 1+0 records in 00:05:18.985 1+0 records out 00:05:18.985 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00027332 s, 15.0 MB/s 00:05:18.985 18:24:41 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:18.985 18:24:41 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:18.985 18:24:41 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:18.985 18:24:41 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:18.985 18:24:41 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:18.985 18:24:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:18.985 18:24:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:18.985 18:24:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:18.985 18:24:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.985 18:24:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:18.985 18:24:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:18.985 { 00:05:18.985 "bdev_name": "Malloc0", 00:05:18.985 "nbd_device": "/dev/nbd0" 00:05:18.985 }, 00:05:18.985 { 00:05:18.985 "bdev_name": "Malloc1", 00:05:18.985 "nbd_device": "/dev/nbd1" 00:05:18.985 } 
00:05:18.985 ]' 00:05:18.985 18:24:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:18.985 { 00:05:18.985 "bdev_name": "Malloc0", 00:05:18.985 "nbd_device": "/dev/nbd0" 00:05:18.985 }, 00:05:18.985 { 00:05:18.985 "bdev_name": "Malloc1", 00:05:18.985 "nbd_device": "/dev/nbd1" 00:05:18.985 } 00:05:18.985 ]' 00:05:18.985 18:24:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:19.245 18:24:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:19.245 /dev/nbd1' 00:05:19.245 18:24:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:19.245 /dev/nbd1' 00:05:19.245 18:24:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:19.245 18:24:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:19.245 18:24:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:19.245 18:24:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:19.245 18:24:41 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:19.245 18:24:41 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:19.245 18:24:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:19.245 18:24:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:19.245 18:24:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:19.245 18:24:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:19.245 18:24:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:19.245 18:24:41 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:19.245 256+0 records in 00:05:19.245 256+0 records out 00:05:19.245 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0118938 s, 88.2 MB/s 00:05:19.245 18:24:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:19.245 18:24:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:19.245 256+0 records in 00:05:19.245 256+0 records out 00:05:19.245 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.028075 s, 37.3 MB/s 00:05:19.245 18:24:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:19.245 18:24:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:19.245 256+0 records in 00:05:19.245 256+0 records out 00:05:19.245 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0269528 s, 38.9 MB/s 00:05:19.245 18:24:41 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:19.245 18:24:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:19.245 18:24:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:19.245 18:24:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:19.245 18:24:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:19.245 18:24:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:19.245 18:24:41 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:19.245 18:24:41 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:05:19.245 18:24:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:19.245 18:24:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:19.245 18:24:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:19.245 18:24:41 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:19.245 18:24:41 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:19.245 18:24:41 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.245 18:24:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:19.245 18:24:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:19.245 18:24:41 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:19.245 18:24:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:19.245 18:24:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:19.505 18:24:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:19.505 18:24:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:19.505 18:24:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:19.505 18:24:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:19.505 18:24:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:19.505 18:24:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:19.505 18:24:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:19.505 18:24:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:19.505 18:24:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:19.505 18:24:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:19.765 18:24:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:19.765 18:24:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:19.765 18:24:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:19.765 18:24:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:19.765 18:24:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:19.765 18:24:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:19.765 18:24:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:19.765 18:24:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:19.765 18:24:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:19.765 18:24:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.765 18:24:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:19.765 18:24:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:19.765 18:24:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:19.765 18:24:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:05:20.024 18:24:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:20.024 18:24:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:20.024 18:24:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:20.024 18:24:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:20.024 18:24:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:20.024 18:24:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:20.024 18:24:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:20.024 18:24:42 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:20.024 18:24:42 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:20.024 18:24:42 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:20.283 18:24:42 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:20.283 [2024-07-15 18:24:42.820154] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:20.542 [2024-07-15 18:24:42.907886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:20.542 [2024-07-15 18:24:42.907888] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.542 [2024-07-15 18:24:42.949955] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:20.542 [2024-07-15 18:24:42.950001] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:23.074 spdk_app_start Round 2 00:05:23.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:23.074 18:24:45 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:23.074 18:24:45 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:23.074 18:24:45 event.app_repeat -- event/event.sh@25 -- # waitforlisten 61964 /var/tmp/spdk-nbd.sock 00:05:23.074 18:24:45 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 61964 ']' 00:05:23.074 18:24:45 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:23.074 18:24:45 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:23.074 18:24:45 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:23.074 18:24:45 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:23.074 18:24:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:23.332 18:24:45 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:23.332 18:24:45 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:23.332 18:24:45 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:23.591 Malloc0 00:05:23.591 18:24:46 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:23.849 Malloc1 00:05:23.850 18:24:46 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:23.850 18:24:46 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.850 18:24:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:23.850 18:24:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:23.850 18:24:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.850 18:24:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:23.850 18:24:46 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:23.850 18:24:46 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.850 18:24:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:23.850 18:24:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:23.850 18:24:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.850 18:24:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:23.850 18:24:46 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:23.850 18:24:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:23.850 18:24:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:23.850 18:24:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:24.108 /dev/nbd0 00:05:24.108 18:24:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:24.108 18:24:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:24.108 18:24:46 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:24.108 18:24:46 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:24.108 18:24:46 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:24.108 18:24:46 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:24.108 18:24:46 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:24.108 18:24:46 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:24.108 18:24:46 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:24.108 18:24:46 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:24.108 18:24:46 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:24.108 1+0 records in 00:05:24.108 1+0 records out 
00:05:24.109 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000322104 s, 12.7 MB/s 00:05:24.109 18:24:46 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:24.109 18:24:46 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:24.109 18:24:46 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:24.109 18:24:46 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:24.109 18:24:46 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:24.109 18:24:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:24.109 18:24:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:24.109 18:24:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:24.367 /dev/nbd1 00:05:24.367 18:24:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:24.367 18:24:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:24.367 18:24:46 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:24.367 18:24:46 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:24.367 18:24:46 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:24.367 18:24:46 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:24.367 18:24:46 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:24.367 18:24:46 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:24.367 18:24:46 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:24.367 18:24:46 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:24.367 18:24:46 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:24.367 1+0 records in 00:05:24.367 1+0 records out 00:05:24.367 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000328808 s, 12.5 MB/s 00:05:24.367 18:24:46 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:24.367 18:24:46 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:24.367 18:24:46 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:24.367 18:24:46 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:24.367 18:24:46 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:24.367 18:24:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:24.367 18:24:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:24.367 18:24:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:24.367 18:24:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.367 18:24:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:24.626 18:24:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:24.626 { 00:05:24.626 "bdev_name": "Malloc0", 00:05:24.626 "nbd_device": "/dev/nbd0" 00:05:24.626 }, 00:05:24.626 { 00:05:24.626 "bdev_name": "Malloc1", 00:05:24.626 "nbd_device": "/dev/nbd1" 00:05:24.626 } 
00:05:24.626 ]' 00:05:24.626 18:24:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:24.626 { 00:05:24.626 "bdev_name": "Malloc0", 00:05:24.626 "nbd_device": "/dev/nbd0" 00:05:24.626 }, 00:05:24.626 { 00:05:24.626 "bdev_name": "Malloc1", 00:05:24.626 "nbd_device": "/dev/nbd1" 00:05:24.626 } 00:05:24.626 ]' 00:05:24.626 18:24:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:24.626 18:24:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:24.626 /dev/nbd1' 00:05:24.626 18:24:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:24.626 /dev/nbd1' 00:05:24.626 18:24:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:24.626 18:24:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:24.626 18:24:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:24.626 18:24:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:24.626 18:24:47 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:24.626 18:24:47 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:24.626 18:24:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.626 18:24:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:24.626 18:24:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:24.626 18:24:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:24.626 18:24:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:24.626 18:24:47 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:24.626 256+0 records in 00:05:24.626 256+0 records out 00:05:24.626 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00556338 s, 188 MB/s 00:05:24.626 18:24:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:24.626 18:24:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:24.626 256+0 records in 00:05:24.626 256+0 records out 00:05:24.626 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0285306 s, 36.8 MB/s 00:05:24.626 18:24:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:24.626 18:24:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:24.884 256+0 records in 00:05:24.884 256+0 records out 00:05:24.884 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0291446 s, 36.0 MB/s 00:05:24.884 18:24:47 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:24.884 18:24:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.884 18:24:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:24.884 18:24:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:24.884 18:24:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:24.884 18:24:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:24.884 18:24:47 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:24.884 18:24:47 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:24.884 18:24:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:24.884 18:24:47 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:24.884 18:24:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:24.884 18:24:47 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:24.884 18:24:47 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:24.884 18:24:47 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.884 18:24:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.884 18:24:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:24.884 18:24:47 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:24.884 18:24:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:24.884 18:24:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:25.142 18:24:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:25.142 18:24:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:25.142 18:24:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:25.142 18:24:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:25.142 18:24:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:25.142 18:24:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:25.142 18:24:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:25.142 18:24:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:25.142 18:24:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:25.142 18:24:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:25.399 18:24:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:25.399 18:24:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:25.399 18:24:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:25.399 18:24:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:25.399 18:24:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:25.399 18:24:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:25.399 18:24:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:25.399 18:24:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:25.399 18:24:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:25.399 18:24:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.399 18:24:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:25.399 18:24:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:25.399 18:24:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:25.399 18:24:47 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:25.657 18:24:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:25.657 18:24:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:25.657 18:24:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:25.657 18:24:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:25.657 18:24:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:25.657 18:24:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:25.657 18:24:48 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:25.657 18:24:48 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:25.657 18:24:48 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:25.657 18:24:48 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:25.915 18:24:48 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:25.915 [2024-07-15 18:24:48.432685] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:26.173 [2024-07-15 18:24:48.528506] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:26.173 [2024-07-15 18:24:48.528508] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.173 [2024-07-15 18:24:48.569968] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:26.173 [2024-07-15 18:24:48.570016] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:28.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:28.733 18:24:51 event.app_repeat -- event/event.sh@38 -- # waitforlisten 61964 /var/tmp/spdk-nbd.sock 00:05:28.733 18:24:51 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 61964 ']' 00:05:28.733 18:24:51 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:28.733 18:24:51 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:28.733 18:24:51 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
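The nbd_dd_data_verify calls traced here boil down to a round trip: fill a scratch file with 1 MiB of random data, dd it onto each mapped /dev/nbdX with oflag=direct, then cmp each device back against the file and remove the file. A minimal standalone sketch of that pattern (not the SPDK helper itself; the mktemp path is an assumption, the test uses test/event/nbdrandtest):

    # write 1 MiB of random data to each mapped nbd device, then compare it back
    tmp_file=$(mktemp)                                   # placeholder path, the test uses its own file
    nbd_list=(/dev/nbd0 /dev/nbd1)                       # assumes both devices are already mapped
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256  # 1 MiB of random data
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"                  # non-zero exit on any mismatch
    done
    rm "$tmp_file"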
00:05:28.733 18:24:51 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:28.733 18:24:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:28.992 18:24:51 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:28.992 18:24:51 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:28.992 18:24:51 event.app_repeat -- event/event.sh@39 -- # killprocess 61964 00:05:28.992 18:24:51 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 61964 ']' 00:05:28.992 18:24:51 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 61964 00:05:28.992 18:24:51 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:05:28.992 18:24:51 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:28.992 18:24:51 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61964 00:05:28.992 killing process with pid 61964 00:05:28.992 18:24:51 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:28.992 18:24:51 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:28.992 18:24:51 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61964' 00:05:28.992 18:24:51 event.app_repeat -- common/autotest_common.sh@967 -- # kill 61964 00:05:28.992 18:24:51 event.app_repeat -- common/autotest_common.sh@972 -- # wait 61964 00:05:29.251 spdk_app_start is called in Round 0. 00:05:29.251 Shutdown signal received, stop current app iteration 00:05:29.251 Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 reinitialization... 00:05:29.251 spdk_app_start is called in Round 1. 00:05:29.251 Shutdown signal received, stop current app iteration 00:05:29.251 Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 reinitialization... 00:05:29.251 spdk_app_start is called in Round 2. 00:05:29.251 Shutdown signal received, stop current app iteration 00:05:29.251 Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 reinitialization... 00:05:29.251 spdk_app_start is called in Round 3. 
00:05:29.251 Shutdown signal received, stop current app iteration 00:05:29.251 18:24:51 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:29.251 18:24:51 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:29.251 00:05:29.251 real 0m17.702s 00:05:29.251 user 0m38.390s 00:05:29.251 sys 0m3.348s 00:05:29.251 18:24:51 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:29.251 18:24:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:29.251 ************************************ 00:05:29.251 END TEST app_repeat 00:05:29.251 ************************************ 00:05:29.251 18:24:51 event -- common/autotest_common.sh@1142 -- # return 0 00:05:29.251 18:24:51 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:29.251 18:24:51 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:29.251 18:24:51 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:29.251 18:24:51 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.251 18:24:51 event -- common/autotest_common.sh@10 -- # set +x 00:05:29.251 ************************************ 00:05:29.251 START TEST cpu_locks 00:05:29.251 ************************************ 00:05:29.251 18:24:51 event.cpu_locks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:29.510 * Looking for test storage... 00:05:29.511 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:29.511 18:24:51 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:29.511 18:24:51 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:29.511 18:24:51 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:29.511 18:24:51 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:29.511 18:24:51 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:29.511 18:24:51 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.511 18:24:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:29.511 ************************************ 00:05:29.511 START TEST default_locks 00:05:29.511 ************************************ 00:05:29.511 18:24:51 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:05:29.511 18:24:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=62572 00:05:29.511 18:24:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:29.511 18:24:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 62572 00:05:29.511 18:24:51 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 62572 ']' 00:05:29.511 18:24:51 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.511 18:24:51 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:29.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.511 18:24:51 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
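The teardown path above relies on the killprocess helper: confirm the pid is still alive, read its command name with ps --no-headers -o comm= (reactor_0 for an SPDK app), treat sudo-owned processes specially, then kill and wait. A rough sketch, assuming the target was started by the same shell (otherwise wait cannot reap it):

    killprocess_sketch() {
        local pid=$1
        kill -0 "$pid" || return 0                  # nothing to do if it is already gone
        local name
        name=$(ps --no-headers -o comm= "$pid")     # e.g. reactor_0 for an SPDK reactor
        [ "$name" = sudo ] && return 1              # the real helper handles sudo differently; skipped here
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                 # reap the child and collect its exit status
    }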
00:05:29.511 18:24:51 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:29.511 18:24:51 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:29.511 [2024-07-15 18:24:51.978301] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:05:29.511 [2024-07-15 18:24:51.978377] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62572 ] 00:05:29.511 [2024-07-15 18:24:52.121106] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.770 [2024-07-15 18:24:52.216535] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.339 18:24:52 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:30.339 18:24:52 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:05:30.339 18:24:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 62572 00:05:30.339 18:24:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 62572 00:05:30.339 18:24:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:30.904 18:24:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 62572 00:05:30.904 18:24:53 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 62572 ']' 00:05:30.904 18:24:53 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 62572 00:05:30.904 18:24:53 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:05:30.904 18:24:53 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:30.904 18:24:53 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62572 00:05:30.904 18:24:53 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:30.904 18:24:53 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:30.904 killing process with pid 62572 00:05:30.904 18:24:53 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62572' 00:05:30.904 18:24:53 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 62572 00:05:30.904 18:24:53 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 62572 00:05:31.162 18:24:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 62572 00:05:31.162 18:24:53 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:05:31.162 18:24:53 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 62572 00:05:31.162 18:24:53 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:31.162 18:24:53 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:31.162 18:24:53 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:31.162 18:24:53 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:31.162 18:24:53 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 62572 00:05:31.162 18:24:53 event.cpu_locks.default_locks -- 
common/autotest_common.sh@829 -- # '[' -z 62572 ']' 00:05:31.162 18:24:53 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.162 18:24:53 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:31.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.162 18:24:53 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.162 18:24:53 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:31.162 18:24:53 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:31.162 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (62572) - No such process 00:05:31.162 ERROR: process (pid: 62572) is no longer running 00:05:31.162 18:24:53 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:31.162 18:24:53 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:05:31.162 18:24:53 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:05:31.162 18:24:53 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:31.162 18:24:53 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:31.162 18:24:53 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:31.162 18:24:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:31.162 18:24:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:31.162 18:24:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:31.162 18:24:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:31.162 00:05:31.162 real 0m1.664s 00:05:31.162 user 0m1.709s 00:05:31.162 sys 0m0.530s 00:05:31.162 18:24:53 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.162 18:24:53 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:31.162 ************************************ 00:05:31.162 END TEST default_locks 00:05:31.162 ************************************ 00:05:31.162 18:24:53 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:31.162 18:24:53 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:31.162 18:24:53 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:31.162 18:24:53 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.162 18:24:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:31.162 ************************************ 00:05:31.162 START TEST default_locks_via_rpc 00:05:31.162 ************************************ 00:05:31.162 18:24:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:05:31.162 18:24:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=62631 00:05:31.162 18:24:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 62631 00:05:31.162 18:24:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:31.162 18:24:53 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@829 -- # '[' -z 62631 ']' 00:05:31.162 18:24:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.162 18:24:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:31.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.162 18:24:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.162 18:24:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:31.162 18:24:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.162 [2024-07-15 18:24:53.709228] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:05:31.162 [2024-07-15 18:24:53.709324] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62631 ] 00:05:31.419 [2024-07-15 18:24:53.850626] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.419 [2024-07-15 18:24:53.935286] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.985 18:24:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:31.985 18:24:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:31.985 18:24:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:31.985 18:24:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:31.985 18:24:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.985 18:24:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:31.985 18:24:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:31.985 18:24:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:31.985 18:24:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:31.985 18:24:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:31.985 18:24:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:31.985 18:24:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:31.985 18:24:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.985 18:24:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:31.985 18:24:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 62631 00:05:31.985 18:24:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 62631 00:05:31.985 18:24:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:32.556 18:24:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 62631 00:05:32.556 18:24:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 62631 ']' 
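The locks_exist check that recurs through cpu_locks is a single question to lslocks: does the given pid hold a file lock whose path mentions spdk_cpu_lock (the lock files live under /var/tmp/spdk_cpu_lock_*)? A minimal sketch of that check:

    locks_exist_sketch() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock   # succeeds only if the pid holds a core lock
    }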
00:05:32.556 18:24:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 62631 00:05:32.556 18:24:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:05:32.556 18:24:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:32.556 18:24:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62631 00:05:32.556 18:24:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:32.556 18:24:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:32.556 killing process with pid 62631 00:05:32.556 18:24:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62631' 00:05:32.556 18:24:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 62631 00:05:32.556 18:24:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 62631 00:05:32.814 00:05:32.814 real 0m1.678s 00:05:32.814 user 0m1.753s 00:05:32.814 sys 0m0.522s 00:05:32.814 18:24:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:32.814 18:24:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.814 ************************************ 00:05:32.814 END TEST default_locks_via_rpc 00:05:32.814 ************************************ 00:05:32.814 18:24:55 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:32.814 18:24:55 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:32.814 18:24:55 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:32.814 18:24:55 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:32.814 18:24:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:32.814 ************************************ 00:05:32.814 START TEST non_locking_app_on_locked_coremask 00:05:32.814 ************************************ 00:05:32.814 18:24:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:05:32.814 18:24:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=62694 00:05:32.814 18:24:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 62694 /var/tmp/spdk.sock 00:05:32.814 18:24:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:32.814 18:24:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 62694 ']' 00:05:32.814 18:24:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.814 18:24:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:32.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.814 18:24:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
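default_locks_via_rpc drives the same state through JSON-RPC instead of command-line flags: framework_disable_cpumask_locks releases the per-core lock files and framework_enable_cpumask_locks re-claims them, which is what the surrounding lock checks verify. A sketch of that toggle, assuming the target listens on the default socket and its pid is known (TGT_PID is a placeholder, and lslocks stands in for the test's own lock-file checks):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk.sock
    pid=$TGT_PID                                               # pid of the spdk_tgt behind $sock (assumed known)
    "$rpc" -s "$sock" framework_disable_cpumask_locks          # releases the core lock files
    lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "unexpected: lock still held"
    "$rpc" -s "$sock" framework_enable_cpumask_locks           # re-claims the locks
    lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core lock re-acquired"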
00:05:32.814 18:24:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:32.814 18:24:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:33.071 [2024-07-15 18:24:55.457300] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:05:33.071 [2024-07-15 18:24:55.457370] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62694 ] 00:05:33.071 [2024-07-15 18:24:55.599072] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.329 [2024-07-15 18:24:55.694297] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.893 18:24:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:33.894 18:24:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:33.894 18:24:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:33.894 18:24:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=62722 00:05:33.894 18:24:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 62722 /var/tmp/spdk2.sock 00:05:33.894 18:24:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 62722 ']' 00:05:33.894 18:24:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:33.894 18:24:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:33.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:33.894 18:24:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:33.894 18:24:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:33.894 18:24:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:33.894 [2024-07-15 18:24:56.380737] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:05:33.894 [2024-07-15 18:24:56.380804] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62722 ] 00:05:34.151 [2024-07-15 18:24:56.517046] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
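non_locking_app_on_locked_coremask covers the case traced here: the first spdk_tgt claims core 0, and a second instance reuses the same mask but passes --disable-cpumask-locks plus its own RPC socket, so both run side by side. A simplified launch sketch (the real test waits on each RPC socket between steps):

    bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$bin" -m 0x1 &                                                 # claims /var/tmp/spdk_cpu_lock_000
    pid1=$!
    # (wait for the first instance's RPC socket here; omitted for brevity)
    "$bin" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &  # same core, no lock taken
    pid2=$!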
00:05:34.151 [2024-07-15 18:24:56.517084] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.151 [2024-07-15 18:24:56.711309] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.716 18:24:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:34.716 18:24:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:34.716 18:24:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 62694 00:05:34.716 18:24:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 62694 00:05:34.716 18:24:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:35.646 18:24:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 62694 00:05:35.646 18:24:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 62694 ']' 00:05:35.646 18:24:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 62694 00:05:35.646 18:24:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:35.646 18:24:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:35.646 18:24:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62694 00:05:35.646 18:24:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:35.646 18:24:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:35.646 killing process with pid 62694 00:05:35.646 18:24:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62694' 00:05:35.646 18:24:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 62694 00:05:35.646 18:24:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 62694 00:05:36.214 18:24:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 62722 00:05:36.214 18:24:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 62722 ']' 00:05:36.214 18:24:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 62722 00:05:36.214 18:24:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:36.214 18:24:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:36.214 18:24:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62722 00:05:36.214 18:24:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:36.214 18:24:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:36.214 killing process with pid 62722 00:05:36.214 18:24:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62722' 00:05:36.214 18:24:58 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 62722 00:05:36.214 18:24:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 62722 00:05:36.472 00:05:36.472 real 0m3.667s 00:05:36.472 user 0m4.021s 00:05:36.472 sys 0m1.041s 00:05:36.472 18:24:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:36.472 18:24:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:36.472 ************************************ 00:05:36.472 END TEST non_locking_app_on_locked_coremask 00:05:36.472 ************************************ 00:05:36.732 18:24:59 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:36.732 18:24:59 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:36.732 18:24:59 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:36.732 18:24:59 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.732 18:24:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:36.732 ************************************ 00:05:36.732 START TEST locking_app_on_unlocked_coremask 00:05:36.732 ************************************ 00:05:36.732 18:24:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:05:36.732 18:24:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:36.732 18:24:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=62801 00:05:36.732 18:24:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 62801 /var/tmp/spdk.sock 00:05:36.732 18:24:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 62801 ']' 00:05:36.732 18:24:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.732 18:24:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:36.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.732 18:24:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.732 18:24:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:36.732 18:24:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:36.732 [2024-07-15 18:24:59.192032] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:05:36.732 [2024-07-15 18:24:59.192134] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62801 ] 00:05:36.732 [2024-07-15 18:24:59.339822] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:36.732 [2024-07-15 18:24:59.339893] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.990 [2024-07-15 18:24:59.430317] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.556 18:25:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:37.556 18:25:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:37.556 18:25:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:37.556 18:25:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=62824 00:05:37.557 18:25:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 62824 /var/tmp/spdk2.sock 00:05:37.557 18:25:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 62824 ']' 00:05:37.557 18:25:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:37.557 18:25:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:37.557 18:25:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:37.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:37.557 18:25:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:37.557 18:25:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:37.557 [2024-07-15 18:25:00.082076] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 
00:05:37.557 [2024-07-15 18:25:00.082147] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62824 ] 00:05:37.814 [2024-07-15 18:25:00.218124] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.814 [2024-07-15 18:25:00.411485] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.379 18:25:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:38.379 18:25:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:38.379 18:25:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 62824 00:05:38.379 18:25:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 62824 00:05:38.379 18:25:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:39.750 18:25:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 62801 00:05:39.750 18:25:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 62801 ']' 00:05:39.750 18:25:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 62801 00:05:39.750 18:25:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:39.750 18:25:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:39.750 18:25:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62801 00:05:39.750 18:25:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:39.750 18:25:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:39.750 killing process with pid 62801 00:05:39.750 18:25:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62801' 00:05:39.750 18:25:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 62801 00:05:39.750 18:25:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 62801 00:05:40.031 18:25:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 62824 00:05:40.031 18:25:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 62824 ']' 00:05:40.031 18:25:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 62824 00:05:40.031 18:25:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:40.031 18:25:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:40.031 18:25:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62824 00:05:40.288 18:25:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:40.288 killing process with pid 62824 00:05:40.288 18:25:02 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:40.288 18:25:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62824' 00:05:40.288 18:25:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 62824 00:05:40.288 18:25:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 62824 00:05:40.547 00:05:40.547 real 0m3.876s 00:05:40.547 user 0m4.207s 00:05:40.547 sys 0m1.092s 00:05:40.547 18:25:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:40.547 18:25:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:40.547 ************************************ 00:05:40.547 END TEST locking_app_on_unlocked_coremask 00:05:40.547 ************************************ 00:05:40.547 18:25:03 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:40.547 18:25:03 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:40.547 18:25:03 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:40.547 18:25:03 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.547 18:25:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:40.547 ************************************ 00:05:40.547 START TEST locking_app_on_locked_coremask 00:05:40.547 ************************************ 00:05:40.547 18:25:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:05:40.547 18:25:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=62903 00:05:40.547 18:25:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 62903 /var/tmp/spdk.sock 00:05:40.547 18:25:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 62903 ']' 00:05:40.547 18:25:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.547 18:25:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:40.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.547 18:25:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.547 18:25:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:40.547 18:25:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:40.547 18:25:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:40.547 [2024-07-15 18:25:03.142888] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 
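With two targets up, each instance is addressed through its own UNIX-domain RPC socket: the first uses the default /var/tmp/spdk.sock and the second the /var/tmp/spdk2.sock it was started with via -r. A small sketch, using spdk_get_version as a read-only stand-in call (any RPC method would do):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" -s /var/tmp/spdk.sock  spdk_get_version    # first instance (default socket)
    "$rpc" -s /var/tmp/spdk2.sock spdk_get_version    # second instance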
00:05:40.547 [2024-07-15 18:25:03.142965] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62903 ] 00:05:40.804 [2024-07-15 18:25:03.269392] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.804 [2024-07-15 18:25:03.367628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.736 18:25:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:41.736 18:25:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:41.736 18:25:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=62931 00:05:41.736 18:25:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:41.736 18:25:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 62931 /var/tmp/spdk2.sock 00:05:41.736 18:25:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:41.736 18:25:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 62931 /var/tmp/spdk2.sock 00:05:41.736 18:25:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:41.736 18:25:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:41.736 18:25:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:41.736 18:25:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:41.736 18:25:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 62931 /var/tmp/spdk2.sock 00:05:41.736 18:25:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 62931 ']' 00:05:41.736 18:25:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:41.736 18:25:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:41.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:41.736 18:25:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:41.736 18:25:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:41.736 18:25:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:41.736 [2024-07-15 18:25:04.059284] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 
00:05:41.736 [2024-07-15 18:25:04.059353] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62931 ] 00:05:41.736 [2024-07-15 18:25:04.195054] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 62903 has claimed it. 00:05:41.736 [2024-07-15 18:25:04.195113] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:42.301 ERROR: process (pid: 62931) is no longer running 00:05:42.301 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (62931) - No such process 00:05:42.301 18:25:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:42.301 18:25:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:42.301 18:25:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:42.301 18:25:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:42.301 18:25:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:42.301 18:25:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:42.301 18:25:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 62903 00:05:42.301 18:25:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 62903 00:05:42.301 18:25:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:42.558 18:25:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 62903 00:05:42.558 18:25:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 62903 ']' 00:05:42.558 18:25:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 62903 00:05:42.558 18:25:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:42.558 18:25:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:42.558 18:25:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62903 00:05:42.814 killing process with pid 62903 00:05:42.814 18:25:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:42.814 18:25:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:42.814 18:25:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62903' 00:05:42.814 18:25:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 62903 00:05:42.814 18:25:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 62903 00:05:43.071 00:05:43.071 real 0m2.421s 00:05:43.071 user 0m2.722s 00:05:43.071 sys 0m0.590s 00:05:43.071 18:25:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.071 ************************************ 00:05:43.071 END 
TEST locking_app_on_locked_coremask 00:05:43.071 ************************************ 00:05:43.071 18:25:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:43.071 18:25:05 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:43.071 18:25:05 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:43.071 18:25:05 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:43.071 18:25:05 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.071 18:25:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:43.071 ************************************ 00:05:43.071 START TEST locking_overlapped_coremask 00:05:43.071 ************************************ 00:05:43.071 18:25:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:05:43.071 18:25:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=62982 00:05:43.071 18:25:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:43.071 18:25:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 62982 /var/tmp/spdk.sock 00:05:43.071 18:25:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 62982 ']' 00:05:43.071 18:25:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.071 18:25:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:43.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.071 18:25:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.071 18:25:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:43.071 18:25:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:43.071 [2024-07-15 18:25:05.617171] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 
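locking_app_on_locked_coremask asserts the failure case: the second target must not come up because core 0 is already claimed, which is what the NOT waitforlisten wrapper in the trace checks. A simplified expected-failure wrapper in the spirit of that helper (the name expect_failure is made up here; the harness calls it NOT and additionally treats signal exits above 128 separately):

    expect_failure() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))        # success for this wrapper means the wrapped command failed
    }
    # usage: expect_failure waitforlisten 62931 /var/tmp/spdk2.sock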
00:05:43.071 [2024-07-15 18:25:05.617257] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62982 ] 00:05:43.328 [2024-07-15 18:25:05.757750] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:43.328 [2024-07-15 18:25:05.848388] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:43.328 [2024-07-15 18:25:05.848581] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.328 [2024-07-15 18:25:05.848605] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:43.893 18:25:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:43.893 18:25:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:43.893 18:25:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:43.893 18:25:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=63007 00:05:43.893 18:25:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 63007 /var/tmp/spdk2.sock 00:05:43.893 18:25:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:43.893 18:25:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 63007 /var/tmp/spdk2.sock 00:05:43.893 18:25:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:43.893 18:25:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:43.893 18:25:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:43.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:43.893 18:25:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:43.893 18:25:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 63007 /var/tmp/spdk2.sock 00:05:43.893 18:25:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 63007 ']' 00:05:43.893 18:25:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:43.893 18:25:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:43.893 18:25:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:43.893 18:25:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:43.893 18:25:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:44.151 [2024-07-15 18:25:06.515473] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 
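The overlapped test uses masks 0x7 and 0x1c, which intersect on exactly one core, and that intersection is what the claim error that follows reports. The overlap can be confirmed with plain shell arithmetic:

    #   0x07 = 0b00111  -> cores 0,1,2
    #   0x1c = 0b11100  -> cores 2,3,4
    printf 'overlap mask: %#x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. core 2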
00:05:44.151 [2024-07-15 18:25:06.515532] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63007 ] 00:05:44.151 [2024-07-15 18:25:06.655286] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 62982 has claimed it. 00:05:44.151 [2024-07-15 18:25:06.655342] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:44.714 ERROR: process (pid: 63007) is no longer running 00:05:44.714 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (63007) - No such process 00:05:44.714 18:25:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:44.714 18:25:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:44.714 18:25:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:44.714 18:25:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:44.714 18:25:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:44.714 18:25:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:44.714 18:25:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:44.714 18:25:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:44.714 18:25:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:44.714 18:25:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:44.714 18:25:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 62982 00:05:44.714 18:25:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 62982 ']' 00:05:44.714 18:25:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 62982 00:05:44.714 18:25:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:05:44.714 18:25:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:44.714 18:25:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62982 00:05:44.714 18:25:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:44.714 18:25:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:44.714 18:25:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62982' 00:05:44.714 killing process with pid 62982 00:05:44.714 18:25:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 62982 00:05:44.714 18:25:07 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 62982 00:05:44.970 ************************************ 00:05:44.970 END TEST locking_overlapped_coremask 00:05:44.970 ************************************ 00:05:44.970 00:05:44.970 real 0m1.976s 00:05:44.970 user 0m5.318s 00:05:44.970 sys 0m0.432s 00:05:44.970 18:25:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:44.970 18:25:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:45.227 18:25:07 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:45.227 18:25:07 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:45.227 18:25:07 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:45.227 18:25:07 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.227 18:25:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:45.227 ************************************ 00:05:45.227 START TEST locking_overlapped_coremask_via_rpc 00:05:45.227 ************************************ 00:05:45.227 18:25:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:05:45.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.227 18:25:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=63058 00:05:45.227 18:25:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 63058 /var/tmp/spdk.sock 00:05:45.227 18:25:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 63058 ']' 00:05:45.227 18:25:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.227 18:25:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:45.227 18:25:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:45.227 18:25:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.227 18:25:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:45.227 18:25:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.227 [2024-07-15 18:25:07.671054] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:05:45.227 [2024-07-15 18:25:07.671126] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63058 ] 00:05:45.227 [2024-07-15 18:25:07.815443] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
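After the failed claim, the surviving target (mask 0x7) must still own exactly locks 000 through 002, which is what the check_remaining_locks glob comparison in the trace verifies. The same idea in isolation:

    locks=(/var/tmp/spdk_cpu_lock_*)                      # lock files currently present
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})    # cores 0-2 should still be claimed
    [[ "${locks[*]}" == "${locks_expected[*]}" ]]         # non-zero if any lock is missing or extra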
00:05:45.227 [2024-07-15 18:25:07.815503] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:45.484 [2024-07-15 18:25:07.900825] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:45.484 [2024-07-15 18:25:07.901043] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.484 [2024-07-15 18:25:07.901045] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:46.052 18:25:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:46.052 18:25:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:46.052 18:25:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=63083 00:05:46.052 18:25:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 63083 /var/tmp/spdk2.sock 00:05:46.052 18:25:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:46.052 18:25:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 63083 ']' 00:05:46.052 18:25:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:46.052 18:25:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:46.052 18:25:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:46.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:46.052 18:25:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:46.052 18:25:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.052 [2024-07-15 18:25:08.562376] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:05:46.052 [2024-07-15 18:25:08.562801] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63083 ] 00:05:46.310 [2024-07-15 18:25:08.697399] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
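The two targets are deliberately started with overlapping core masks: 0x7 is binary 111 (cores 0, 1, 2) and 0x1c is binary 11100 (cores 2, 3, 4), so core 2 is claimed by both. A quick illustrative shell check of that overlap:

  for i in {0..7}; do (( (0x7  >> i) & 1 )) && echo "mask 0x7 covers core $i";  done
  for i in {0..7}; do (( (0x1c >> i) & 1 )) && echo "mask 0x1c covers core $i"; done
  printf 'shared cores: 0x%x\n' $(( 0x7 & 0x1c ))   # 0x4, i.e. core 2, which triggers the lock conflict below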
00:05:46.310 [2024-07-15 18:25:08.697559] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:46.310 [2024-07-15 18:25:08.886631] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:46.310 [2024-07-15 18:25:08.886681] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:46.310 [2024-07-15 18:25:08.886684] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:05:46.878 18:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:46.878 18:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:46.878 18:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:46.878 18:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.878 18:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.878 18:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.878 18:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:46.878 18:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:46.878 18:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:46.878 18:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:46.878 18:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:46.878 18:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:46.878 18:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:46.878 18:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:46.878 18:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.878 18:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.878 [2024-07-15 18:25:09.431699] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 63058 has claimed it. 
00:05:46.878 2024/07/15 18:25:09 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:05:46.878 request: 00:05:46.878 { 00:05:46.878 "method": "framework_enable_cpumask_locks", 00:05:46.878 "params": {} 00:05:46.878 } 00:05:46.878 Got JSON-RPC error response 00:05:46.878 GoRPCClient: error on JSON-RPC call 00:05:46.878 18:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:46.878 18:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:46.878 18:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:46.878 18:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:46.878 18:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:46.878 18:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 63058 /var/tmp/spdk.sock 00:05:46.878 18:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 63058 ']' 00:05:46.878 18:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.878 18:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:46.878 18:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.878 18:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:46.878 18:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.137 18:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:47.137 18:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:47.137 18:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 63083 /var/tmp/spdk2.sock 00:05:47.137 18:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 63083 ']' 00:05:47.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:47.137 18:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:47.137 18:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:47.137 18:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
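The exchange above can be reproduced by hand against the same two sockets; rpc_cmd is a thin wrapper and, assuming an SPDK checkout at /home/vagrant/spdk_repo/spdk, the direct calls would look roughly like this:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC -s /var/tmp/spdk.sock  framework_enable_cpumask_locks   # first target: claims cores 0-2 and succeeds
  $RPC -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # second target: fails with Code=-32603, core 2 already locked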
00:05:47.137 18:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:47.137 18:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.397 18:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:47.397 18:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:47.397 18:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:47.397 18:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:47.397 18:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:47.397 18:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:47.397 00:05:47.397 real 0m2.252s 00:05:47.397 user 0m0.950s 00:05:47.397 sys 0m0.243s 00:05:47.397 18:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:47.397 18:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.397 ************************************ 00:05:47.397 END TEST locking_overlapped_coremask_via_rpc 00:05:47.397 ************************************ 00:05:47.397 18:25:09 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:47.397 18:25:09 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:47.397 18:25:09 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 63058 ]] 00:05:47.397 18:25:09 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 63058 00:05:47.397 18:25:09 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 63058 ']' 00:05:47.397 18:25:09 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 63058 00:05:47.397 18:25:09 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:05:47.397 18:25:09 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:47.397 18:25:09 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63058 00:05:47.397 killing process with pid 63058 00:05:47.397 18:25:09 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:47.397 18:25:09 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:47.397 18:25:09 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63058' 00:05:47.397 18:25:09 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 63058 00:05:47.397 18:25:09 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 63058 00:05:47.965 18:25:10 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 63083 ]] 00:05:47.965 18:25:10 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 63083 00:05:47.965 18:25:10 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 63083 ']' 00:05:47.965 18:25:10 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 63083 00:05:47.965 18:25:10 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:05:47.965 18:25:10 
event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:47.965 18:25:10 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63083 00:05:47.965 18:25:10 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:47.965 18:25:10 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:47.965 18:25:10 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63083' 00:05:47.965 killing process with pid 63083 00:05:47.965 18:25:10 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 63083 00:05:47.965 18:25:10 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 63083 00:05:48.531 18:25:10 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:48.531 18:25:10 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:48.531 18:25:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 63058 ]] 00:05:48.531 18:25:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 63058 00:05:48.531 18:25:10 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 63058 ']' 00:05:48.531 18:25:10 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 63058 00:05:48.531 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (63058) - No such process 00:05:48.531 Process with pid 63058 is not found 00:05:48.531 18:25:10 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 63058 is not found' 00:05:48.531 18:25:10 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 63083 ]] 00:05:48.531 18:25:10 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 63083 00:05:48.531 18:25:10 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 63083 ']' 00:05:48.531 18:25:10 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 63083 00:05:48.531 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (63083) - No such process 00:05:48.531 Process with pid 63083 is not found 00:05:48.531 18:25:10 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 63083 is not found' 00:05:48.531 18:25:10 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:48.531 00:05:48.531 real 0m19.114s 00:05:48.531 user 0m32.537s 00:05:48.531 sys 0m5.334s 00:05:48.531 18:25:10 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:48.531 18:25:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:48.531 ************************************ 00:05:48.531 END TEST cpu_locks 00:05:48.531 ************************************ 00:05:48.531 18:25:10 event -- common/autotest_common.sh@1142 -- # return 0 00:05:48.531 00:05:48.531 real 0m45.874s 00:05:48.531 user 1m25.805s 00:05:48.532 sys 0m9.632s 00:05:48.532 18:25:10 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:48.532 18:25:10 event -- common/autotest_common.sh@10 -- # set +x 00:05:48.532 ************************************ 00:05:48.532 END TEST event 00:05:48.532 ************************************ 00:05:48.532 18:25:11 -- common/autotest_common.sh@1142 -- # return 0 00:05:48.532 18:25:11 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:48.532 18:25:11 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:48.532 18:25:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.532 18:25:11 -- common/autotest_common.sh@10 -- # set +x 00:05:48.532 ************************************ 00:05:48.532 START TEST thread 
00:05:48.532 ************************************ 00:05:48.532 18:25:11 thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:48.532 * Looking for test storage... 00:05:48.532 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:48.532 18:25:11 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:48.532 18:25:11 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:48.532 18:25:11 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.532 18:25:11 thread -- common/autotest_common.sh@10 -- # set +x 00:05:48.790 ************************************ 00:05:48.790 START TEST thread_poller_perf 00:05:48.790 ************************************ 00:05:48.790 18:25:11 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:48.790 [2024-07-15 18:25:11.178794] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:05:48.790 [2024-07-15 18:25:11.178884] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63236 ] 00:05:48.790 [2024-07-15 18:25:11.316501] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.047 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:49.047 [2024-07-15 18:25:11.444278] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.981 ====================================== 00:05:49.981 busy:2497686574 (cyc) 00:05:49.981 total_run_count: 393000 00:05:49.981 tsc_hz: 2490000000 (cyc) 00:05:49.981 ====================================== 00:05:49.981 poller_cost: 6355 (cyc), 2552 (nsec) 00:05:49.981 00:05:49.981 real 0m1.408s 00:05:49.981 user 0m1.231s 00:05:49.981 sys 0m0.070s 00:05:49.981 18:25:12 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:49.981 18:25:12 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:49.981 ************************************ 00:05:49.981 END TEST thread_poller_perf 00:05:49.981 ************************************ 00:05:50.239 18:25:12 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:50.239 18:25:12 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:50.239 18:25:12 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:50.239 18:25:12 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.239 18:25:12 thread -- common/autotest_common.sh@10 -- # set +x 00:05:50.239 ************************************ 00:05:50.239 START TEST thread_poller_perf 00:05:50.239 ************************************ 00:05:50.239 18:25:12 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:50.239 [2024-07-15 18:25:12.663196] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 
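Both poller_perf invocations in this thread suite come from test/thread/thread.sh and differ only in -l, the poller period in microseconds, which is why the first run above reports '1 microseconds period' while the run just launched reports a 0 microsecond period. Run directly, the two commands would be:

  POLLER_PERF=/home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf
  $POLLER_PERF -b 1000 -l 1 -t 1   # 1000 pollers, 1 us period, 1 second of runtime
  $POLLER_PERF -b 1000 -l 0 -t 1   # same load with period 0: pollers run on every reactor iteration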
00:05:50.239 [2024-07-15 18:25:12.663331] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63266 ] 00:05:50.239 [2024-07-15 18:25:12.806610] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.496 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:50.496 [2024-07-15 18:25:12.930246] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.429 ====================================== 00:05:51.429 busy:2492095366 (cyc) 00:05:51.429 total_run_count: 5125000 00:05:51.429 tsc_hz: 2490000000 (cyc) 00:05:51.429 ====================================== 00:05:51.429 poller_cost: 486 (cyc), 195 (nsec) 00:05:51.429 00:05:51.429 real 0m1.403s 00:05:51.429 user 0m1.221s 00:05:51.429 sys 0m0.074s 00:05:51.429 18:25:14 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:51.688 18:25:14 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:51.688 ************************************ 00:05:51.688 END TEST thread_poller_perf 00:05:51.688 ************************************ 00:05:51.688 18:25:14 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:51.688 18:25:14 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:51.688 00:05:51.688 real 0m3.078s 00:05:51.688 user 0m2.552s 00:05:51.688 sys 0m0.311s 00:05:51.688 18:25:14 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:51.688 18:25:14 thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.688 ************************************ 00:05:51.688 END TEST thread 00:05:51.688 ************************************ 00:05:51.688 18:25:14 -- common/autotest_common.sh@1142 -- # return 0 00:05:51.688 18:25:14 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:05:51.688 18:25:14 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:51.688 18:25:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.688 18:25:14 -- common/autotest_common.sh@10 -- # set +x 00:05:51.688 ************************************ 00:05:51.688 START TEST accel 00:05:51.688 ************************************ 00:05:51.688 18:25:14 accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:05:51.688 * Looking for test storage... 00:05:51.688 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:05:51.688 18:25:14 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:05:51.688 18:25:14 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:05:51.688 18:25:14 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:51.688 18:25:14 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=63346 00:05:51.688 18:25:14 accel -- accel/accel.sh@63 -- # waitforlisten 63346 00:05:51.688 18:25:14 accel -- common/autotest_common.sh@829 -- # '[' -z 63346 ']' 00:05:51.688 18:25:14 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.688 18:25:14 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:51.688 18:25:14 accel -- accel/accel.sh@61 -- # build_accel_config 00:05:51.688 18:25:14 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
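The poller_cost values printed by both runs are simply the busy cycle count divided by total_run_count, with the nanosecond figure derived from tsc_hz. Reproducing the reported numbers from the raw counters (illustrative arithmetic only):

  awk 'BEGIN {
      tsc_ghz = 2490000000 / 1e9                  # 2.49 GHz, from the tsc_hz lines above
      printf "run 1 (-l 1): %d cyc, %d nsec\n", 2497686574 / 393000,  (2497686574 / 393000)  / tsc_ghz
      printf "run 2 (-l 0): %d cyc, %d nsec\n", 2492095366 / 5125000, (2492095366 / 5125000) / tsc_ghz
  }'
  # prints 6355 cyc / 2552 nsec and 486 cyc / 195 nsec, matching the poller_cost lines above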
00:05:51.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.688 18:25:14 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:51.688 18:25:14 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:51.688 18:25:14 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:51.688 18:25:14 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:51.688 18:25:14 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:51.688 18:25:14 accel -- common/autotest_common.sh@10 -- # set +x 00:05:51.688 18:25:14 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:51.688 18:25:14 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:51.688 18:25:14 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:51.688 18:25:14 accel -- accel/accel.sh@41 -- # jq -r . 00:05:51.946 [2024-07-15 18:25:14.352226] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:05:51.946 [2024-07-15 18:25:14.352297] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63346 ] 00:05:51.946 [2024-07-15 18:25:14.479206] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.205 [2024-07-15 18:25:14.573687] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.771 18:25:15 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:52.771 18:25:15 accel -- common/autotest_common.sh@862 -- # return 0 00:05:52.771 18:25:15 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:05:52.771 18:25:15 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:05:52.771 18:25:15 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:05:52.771 18:25:15 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:05:52.771 18:25:15 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:52.771 18:25:15 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:05:52.771 18:25:15 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:52.771 18:25:15 accel -- common/autotest_common.sh@10 -- # set +x 00:05:52.771 18:25:15 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:52.771 18:25:15 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:52.771 18:25:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:52.771 18:25:15 accel -- accel/accel.sh@72 -- # IFS== 00:05:52.771 18:25:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:52.771 18:25:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:52.771 18:25:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:52.771 18:25:15 accel -- accel/accel.sh@72 -- # IFS== 00:05:52.771 18:25:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:52.771 18:25:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:52.771 18:25:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:52.771 18:25:15 accel -- accel/accel.sh@72 -- # IFS== 00:05:52.771 18:25:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:52.771 18:25:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:52.771 18:25:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:52.771 18:25:15 accel -- accel/accel.sh@72 -- # IFS== 00:05:52.771 18:25:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:52.771 18:25:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:52.771 18:25:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:52.771 18:25:15 accel -- accel/accel.sh@72 -- # IFS== 00:05:52.771 18:25:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:52.771 18:25:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:52.771 18:25:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:52.771 18:25:15 accel -- accel/accel.sh@72 -- # IFS== 00:05:52.771 18:25:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:52.771 18:25:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:52.771 18:25:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:52.771 18:25:15 accel -- accel/accel.sh@72 -- # IFS== 00:05:52.771 18:25:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:52.771 18:25:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:52.771 18:25:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:52.771 18:25:15 accel -- accel/accel.sh@72 -- # IFS== 00:05:52.771 18:25:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:52.771 18:25:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:52.771 18:25:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:52.771 18:25:15 accel -- accel/accel.sh@72 -- # IFS== 00:05:52.771 18:25:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:52.771 18:25:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:52.771 18:25:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:52.771 18:25:15 accel -- accel/accel.sh@72 -- # IFS== 00:05:52.771 18:25:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:52.771 18:25:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:52.771 18:25:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:52.771 18:25:15 accel -- accel/accel.sh@72 -- # IFS== 00:05:52.771 18:25:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:52.771 18:25:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:52.771 18:25:15 accel -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}" 00:05:52.771 18:25:15 accel -- accel/accel.sh@72 -- # IFS== 00:05:52.771 18:25:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:52.771 18:25:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:52.771 18:25:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:52.771 18:25:15 accel -- accel/accel.sh@72 -- # IFS== 00:05:52.771 18:25:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:52.771 18:25:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:52.771 18:25:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:52.771 18:25:15 accel -- accel/accel.sh@72 -- # IFS== 00:05:52.771 18:25:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:52.771 18:25:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:52.771 18:25:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:52.771 18:25:15 accel -- accel/accel.sh@72 -- # IFS== 00:05:52.771 18:25:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:52.771 18:25:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:52.771 18:25:15 accel -- accel/accel.sh@75 -- # killprocess 63346 00:05:52.771 18:25:15 accel -- common/autotest_common.sh@948 -- # '[' -z 63346 ']' 00:05:52.771 18:25:15 accel -- common/autotest_common.sh@952 -- # kill -0 63346 00:05:52.771 18:25:15 accel -- common/autotest_common.sh@953 -- # uname 00:05:52.771 18:25:15 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:52.771 18:25:15 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63346 00:05:52.771 18:25:15 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:52.771 18:25:15 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:52.771 killing process with pid 63346 00:05:52.771 18:25:15 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63346' 00:05:52.771 18:25:15 accel -- common/autotest_common.sh@967 -- # kill 63346 00:05:52.771 18:25:15 accel -- common/autotest_common.sh@972 -- # wait 63346 00:05:53.338 18:25:15 accel -- accel/accel.sh@76 -- # trap - ERR 00:05:53.338 18:25:15 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:05:53.338 18:25:15 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:53.338 18:25:15 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.338 18:25:15 accel -- common/autotest_common.sh@10 -- # set +x 00:05:53.338 18:25:15 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:05:53.338 18:25:15 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:53.338 18:25:15 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:05:53.338 18:25:15 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:53.338 18:25:15 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:53.338 18:25:15 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:53.338 18:25:15 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:53.338 18:25:15 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:53.338 18:25:15 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:05:53.338 18:25:15 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:05:53.338 18:25:15 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:53.338 18:25:15 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:05:53.338 18:25:15 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:53.338 18:25:15 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:53.338 18:25:15 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:53.338 18:25:15 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.338 18:25:15 accel -- common/autotest_common.sh@10 -- # set +x 00:05:53.338 ************************************ 00:05:53.338 START TEST accel_missing_filename 00:05:53.338 ************************************ 00:05:53.338 18:25:15 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:05:53.338 18:25:15 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:05:53.338 18:25:15 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:53.338 18:25:15 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:53.338 18:25:15 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:53.338 18:25:15 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:53.338 18:25:15 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:53.338 18:25:15 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:05:53.338 18:25:15 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:53.338 18:25:15 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:05:53.338 18:25:15 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:53.338 18:25:15 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:53.338 18:25:15 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:53.338 18:25:15 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:53.338 18:25:15 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:53.338 18:25:15 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:05:53.338 18:25:15 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:05:53.338 [2024-07-15 18:25:15.799845] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:05:53.338 [2024-07-15 18:25:15.799918] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63410 ] 00:05:53.338 [2024-07-15 18:25:15.944880] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.599 [2024-07-15 18:25:16.041464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.599 [2024-07-15 18:25:16.083644] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:53.599 [2024-07-15 18:25:16.143153] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:05:53.857 A filename is required. 
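The failure exercised here is pure argument validation: for the compress workload accel_perf requires -l, the uncompressed input file, and the NOT-wrapped invocation above omits it on purpose. With the same input file the later compress_verify test uses, the accepted form would be roughly:

  ACCEL_PERF=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
  $ACCEL_PERF -t 1 -w compress                                                 # rejected: 'A filename is required.'
  $ACCEL_PERF -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib  # clears the filename check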
00:05:53.857 18:25:16 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:05:53.857 18:25:16 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:53.857 18:25:16 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:05:53.857 18:25:16 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:05:53.857 18:25:16 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:05:53.857 18:25:16 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:53.857 00:05:53.857 real 0m0.455s 00:05:53.857 user 0m0.282s 00:05:53.857 sys 0m0.112s 00:05:53.857 18:25:16 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:53.857 18:25:16 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:05:53.857 ************************************ 00:05:53.857 END TEST accel_missing_filename 00:05:53.857 ************************************ 00:05:53.857 18:25:16 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:53.857 18:25:16 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:53.857 18:25:16 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:53.857 18:25:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.857 18:25:16 accel -- common/autotest_common.sh@10 -- # set +x 00:05:53.857 ************************************ 00:05:53.857 START TEST accel_compress_verify 00:05:53.857 ************************************ 00:05:53.857 18:25:16 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:53.857 18:25:16 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:05:53.857 18:25:16 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:53.857 18:25:16 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:53.857 18:25:16 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:53.857 18:25:16 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:53.857 18:25:16 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:53.857 18:25:16 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:53.857 18:25:16 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:53.857 18:25:16 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:53.857 18:25:16 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:53.857 18:25:16 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:53.857 18:25:16 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:53.858 18:25:16 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:53.858 18:25:16 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:53.858 18:25:16 accel.accel_compress_verify -- 
accel/accel.sh@40 -- # local IFS=, 00:05:53.858 18:25:16 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:05:53.858 [2024-07-15 18:25:16.314786] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:05:53.858 [2024-07-15 18:25:16.314903] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63440 ] 00:05:53.858 [2024-07-15 18:25:16.458857] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.116 [2024-07-15 18:25:16.553005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.116 [2024-07-15 18:25:16.595195] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:54.116 [2024-07-15 18:25:16.654604] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:05:54.116 00:05:54.116 Compression does not support the verify option, aborting. 00:05:54.374 18:25:16 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:05:54.374 18:25:16 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:54.374 18:25:16 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:05:54.374 18:25:16 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:05:54.374 18:25:16 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:05:54.374 18:25:16 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:54.374 00:05:54.374 real 0m0.452s 00:05:54.374 user 0m0.276s 00:05:54.374 sys 0m0.111s 00:05:54.374 18:25:16 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:54.374 18:25:16 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:05:54.374 ************************************ 00:05:54.374 END TEST accel_compress_verify 00:05:54.374 ************************************ 00:05:54.374 18:25:16 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:54.375 18:25:16 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:54.375 18:25:16 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:54.375 18:25:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.375 18:25:16 accel -- common/autotest_common.sh@10 -- # set +x 00:05:54.375 ************************************ 00:05:54.375 START TEST accel_wrong_workload 00:05:54.375 ************************************ 00:05:54.375 18:25:16 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:05:54.375 18:25:16 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:05:54.375 18:25:16 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:54.375 18:25:16 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:54.375 18:25:16 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:54.375 18:25:16 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:54.375 18:25:16 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:54.375 18:25:16 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 
00:05:54.375 18:25:16 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:54.375 18:25:16 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:05:54.375 18:25:16 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:54.375 18:25:16 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:54.375 18:25:16 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:54.375 18:25:16 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:54.375 18:25:16 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:54.375 18:25:16 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:05:54.375 18:25:16 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:05:54.375 Unsupported workload type: foobar 00:05:54.375 [2024-07-15 18:25:16.833603] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:54.375 accel_perf options: 00:05:54.375 [-h help message] 00:05:54.375 [-q queue depth per core] 00:05:54.375 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:54.375 [-T number of threads per core 00:05:54.375 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:54.375 [-t time in seconds] 00:05:54.375 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:54.375 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:54.375 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:54.375 [-l for compress/decompress workloads, name of uncompressed input file 00:05:54.375 [-S for crc32c workload, use this seed value (default 0) 00:05:54.375 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:54.375 [-f for fill workload, use this BYTE value (default 255) 00:05:54.375 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:54.375 [-y verify result if this switch is on] 00:05:54.375 [-a tasks to allocate per core (default: same value as -q)] 00:05:54.375 Can be used to spread operations across a wider range of memory. 
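The usage text above is printed because foobar is not one of the accepted -w workload names; any entry from the listed set would get past this check. For instance, the crc32c form this suite runs a few tests later:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y   # 1 second crc32c run, seed 32, verify results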
00:05:54.375 18:25:16 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:05:54.375 18:25:16 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:54.375 18:25:16 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:54.375 18:25:16 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:54.375 00:05:54.375 real 0m0.041s 00:05:54.375 user 0m0.021s 00:05:54.375 sys 0m0.019s 00:05:54.375 18:25:16 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:54.375 18:25:16 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:05:54.375 ************************************ 00:05:54.375 END TEST accel_wrong_workload 00:05:54.375 ************************************ 00:05:54.375 18:25:16 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:54.375 18:25:16 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:54.375 18:25:16 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:54.375 18:25:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.375 18:25:16 accel -- common/autotest_common.sh@10 -- # set +x 00:05:54.375 ************************************ 00:05:54.375 START TEST accel_negative_buffers 00:05:54.375 ************************************ 00:05:54.375 18:25:16 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:54.375 18:25:16 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:05:54.375 18:25:16 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:54.375 18:25:16 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:54.375 18:25:16 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:54.375 18:25:16 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:54.375 18:25:16 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:54.375 18:25:16 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:05:54.375 18:25:16 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:05:54.375 18:25:16 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:54.375 18:25:16 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:54.375 18:25:16 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:54.375 18:25:16 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:54.375 18:25:16 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:54.375 18:25:16 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:54.375 18:25:16 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:05:54.375 18:25:16 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:05:54.375 -x option must be non-negative. 
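This time the option under test is -x, the number of xor source buffers, which per the usage text (repeated just below) has a minimum of 2; the negative value is rejected during argument parsing before any work is submitted. A value that clears the validation, as a hypothetical invocation outside this run:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3   # xor across 3 source buffers, verify the result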
00:05:54.375 [2024-07-15 18:25:16.938485] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:54.375 accel_perf options: 00:05:54.375 [-h help message] 00:05:54.375 [-q queue depth per core] 00:05:54.375 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:54.375 [-T number of threads per core 00:05:54.375 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:54.375 [-t time in seconds] 00:05:54.375 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:54.375 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:54.375 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:54.375 [-l for compress/decompress workloads, name of uncompressed input file 00:05:54.375 [-S for crc32c workload, use this seed value (default 0) 00:05:54.375 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:54.375 [-f for fill workload, use this BYTE value (default 255) 00:05:54.375 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:54.375 [-y verify result if this switch is on] 00:05:54.375 [-a tasks to allocate per core (default: same value as -q)] 00:05:54.375 Can be used to spread operations across a wider range of memory. 00:05:54.375 18:25:16 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:05:54.375 18:25:16 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:54.375 18:25:16 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:54.375 18:25:16 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:54.375 00:05:54.375 real 0m0.040s 00:05:54.375 user 0m0.019s 00:05:54.375 sys 0m0.020s 00:05:54.375 18:25:16 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:54.375 18:25:16 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:05:54.375 ************************************ 00:05:54.375 END TEST accel_negative_buffers 00:05:54.375 ************************************ 00:05:54.633 18:25:16 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:54.633 18:25:16 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:54.633 18:25:16 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:54.633 18:25:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.633 18:25:16 accel -- common/autotest_common.sh@10 -- # set +x 00:05:54.633 ************************************ 00:05:54.633 START TEST accel_crc32c 00:05:54.633 ************************************ 00:05:54.633 18:25:17 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:54.633 18:25:17 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:54.633 18:25:17 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:54.633 18:25:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:54.633 18:25:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:54.633 18:25:17 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:54.633 18:25:17 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:05:54.633 18:25:17 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:54.633 18:25:17 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:54.633 18:25:17 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:54.633 18:25:17 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:54.633 18:25:17 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:54.633 18:25:17 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:54.633 18:25:17 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:54.633 18:25:17 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:54.633 [2024-07-15 18:25:17.039047] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:05:54.633 [2024-07-15 18:25:17.039129] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63493 ] 00:05:54.633 [2024-07-15 18:25:17.179911] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.892 [2024-07-15 18:25:17.263462] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.892 18:25:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:54.892 18:25:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:54.892 18:25:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:54.892 18:25:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:54.892 18:25:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:54.892 18:25:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:54.892 18:25:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:54.892 18:25:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:54.892 18:25:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:54.892 18:25:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:54.892 18:25:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:54.892 18:25:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:54.892 18:25:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:54.892 18:25:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:54.892 18:25:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:54.892 18:25:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:54.892 18:25:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:54.892 18:25:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:54.892 18:25:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:54.892 18:25:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:54.892 18:25:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:05:54.892 18:25:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:54.892 18:25:17 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:54.892 18:25:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:54.892 18:25:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:54.892 18:25:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:54.892 18:25:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:54.892 18:25:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:54.892 18:25:17 accel.accel_crc32c -- accel/accel.sh@19 
-- # read -r var val 00:05:54.892 18:25:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:54.892 18:25:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:54.892 18:25:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:54.892 18:25:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:54.892 18:25:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:54.892 18:25:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:54.892 18:25:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:54.892 18:25:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:54.892 18:25:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:05:54.892 18:25:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:54.892 18:25:17 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:54.892 18:25:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:54.892 18:25:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:54.892 18:25:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:54.892 18:25:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:54.892 18:25:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:54.892 18:25:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:54.892 18:25:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:54.892 18:25:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:54.892 18:25:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:54.892 18:25:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:54.892 18:25:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:05:54.892 18:25:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:54.892 18:25:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:54.892 18:25:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:54.892 18:25:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:54.892 18:25:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:54.892 18:25:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:54.892 18:25:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:54.892 18:25:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:54.892 18:25:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:54.892 18:25:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:54.892 18:25:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:54.892 18:25:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:54.892 18:25:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:54.892 18:25:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:54.892 18:25:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:54.892 18:25:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:54.892 18:25:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:54.893 18:25:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:54.893 18:25:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:55.828 18:25:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:55.828 18:25:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:55.828 18:25:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:55.828 18:25:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var 
val 00:05:55.828 18:25:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:55.828 18:25:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:55.828 18:25:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:55.828 18:25:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:56.087 18:25:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:56.087 18:25:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:56.087 18:25:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:56.087 18:25:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:56.087 18:25:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:56.087 18:25:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:56.087 18:25:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:56.087 18:25:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:56.087 18:25:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:56.087 18:25:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:56.087 18:25:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:56.087 18:25:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:56.087 18:25:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:56.087 18:25:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:56.087 18:25:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:56.087 18:25:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:56.087 18:25:18 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:56.087 18:25:18 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:56.087 18:25:18 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:56.087 00:05:56.087 real 0m1.437s 00:05:56.087 user 0m1.242s 00:05:56.087 sys 0m0.109s 00:05:56.087 18:25:18 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.087 ************************************ 00:05:56.087 END TEST accel_crc32c 00:05:56.087 ************************************ 00:05:56.087 18:25:18 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:56.087 18:25:18 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:56.087 18:25:18 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:56.087 18:25:18 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:56.087 18:25:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.087 18:25:18 accel -- common/autotest_common.sh@10 -- # set +x 00:05:56.087 ************************************ 00:05:56.087 START TEST accel_crc32c_C2 00:05:56.087 ************************************ 00:05:56.087 18:25:18 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:56.087 18:25:18 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:56.087 18:25:18 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:56.087 18:25:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:56.087 18:25:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:56.087 18:25:18 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:56.087 18:25:18 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:56.087 18:25:18 
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:56.087 18:25:18 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:56.087 18:25:18 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:56.087 18:25:18 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:56.087 18:25:18 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:56.087 18:25:18 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:56.087 18:25:18 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:56.087 18:25:18 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:56.087 [2024-07-15 18:25:18.541417] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:05:56.087 [2024-07-15 18:25:18.541499] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63533 ] 00:05:56.087 [2024-07-15 18:25:18.684005] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.345 [2024-07-15 18:25:18.767024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.345 18:25:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:56.345 18:25:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.345 18:25:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:56.345 18:25:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:56.345 18:25:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:56.345 18:25:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.345 18:25:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:56.345 18:25:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:56.345 18:25:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:56.345 18:25:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.345 18:25:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:56.345 18:25:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:56.345 18:25:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:56.345 18:25:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.345 18:25:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:56.345 18:25:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:56.345 18:25:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:56.345 18:25:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.345 18:25:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:56.345 18:25:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:56.345 18:25:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:05:56.345 18:25:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.345 18:25:18 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:56.345 18:25:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:56.345 18:25:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:56.345 18:25:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:56.345 18:25:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.345 18:25:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
IFS=: 00:05:56.345 18:25:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:56.345 18:25:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:56.345 18:25:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.345 18:25:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:56.345 18:25:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:56.345 18:25:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:56.345 18:25:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.345 18:25:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:56.345 18:25:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:56.345 18:25:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:56.345 18:25:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.345 18:25:18 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:56.345 18:25:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:56.345 18:25:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:56.345 18:25:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:56.345 18:25:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.345 18:25:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:56.345 18:25:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:56.345 18:25:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:56.346 18:25:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.346 18:25:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:56.346 18:25:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:56.346 18:25:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:56.346 18:25:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.346 18:25:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:56.346 18:25:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:56.346 18:25:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:56.346 18:25:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.346 18:25:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:56.346 18:25:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:56.346 18:25:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:56.346 18:25:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.346 18:25:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:56.346 18:25:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:56.346 18:25:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:56.346 18:25:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.346 18:25:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:56.346 18:25:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:56.346 18:25:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:56.346 18:25:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.346 18:25:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:56.346 18:25:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:57.748 18:25:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:57.748 18:25:19 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.748 18:25:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:57.748 18:25:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:57.748 18:25:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:57.748 18:25:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.748 18:25:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:57.748 18:25:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:57.748 18:25:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:57.748 18:25:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.748 18:25:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:57.748 18:25:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:57.748 18:25:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:57.748 18:25:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.748 18:25:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:57.748 18:25:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:57.748 18:25:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:57.748 18:25:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.748 18:25:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:57.748 18:25:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:57.748 18:25:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:57.748 18:25:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.748 18:25:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:57.748 18:25:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:57.748 18:25:19 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:57.748 18:25:19 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:57.748 18:25:19 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:57.748 00:05:57.748 real 0m1.460s 00:05:57.748 user 0m1.263s 00:05:57.748 sys 0m0.108s 00:05:57.748 18:25:19 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:57.748 18:25:19 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:57.748 ************************************ 00:05:57.748 END TEST accel_crc32c_C2 00:05:57.748 ************************************ 00:05:57.748 18:25:20 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:57.748 18:25:20 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:57.748 18:25:20 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:57.748 18:25:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.748 18:25:20 accel -- common/autotest_common.sh@10 -- # set +x 00:05:57.748 ************************************ 00:05:57.748 START TEST accel_copy 00:05:57.748 ************************************ 00:05:57.748 18:25:20 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:05:57.748 18:25:20 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:57.748 18:25:20 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:05:57.748 18:25:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:57.748 18:25:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:57.748 18:25:20 accel.accel_copy -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:57.748 18:25:20 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:57.748 18:25:20 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:57.748 18:25:20 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:57.748 18:25:20 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:57.748 18:25:20 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:57.748 18:25:20 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:57.748 18:25:20 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:57.748 18:25:20 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:57.748 18:25:20 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:05:57.748 [2024-07-15 18:25:20.058829] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:05:57.748 [2024-07-15 18:25:20.058920] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63562 ] 00:05:57.748 [2024-07-15 18:25:20.197952] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.748 [2024-07-15 18:25:20.294800] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.748 18:25:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:57.748 18:25:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:57.748 18:25:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:57.748 18:25:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:57.748 18:25:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:58.066 18:25:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:58.066 18:25:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:58.066 18:25:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:58.066 18:25:20 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:05:58.066 18:25:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:58.066 18:25:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:58.066 18:25:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:58.066 18:25:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:58.066 18:25:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:58.066 18:25:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:58.066 18:25:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:58.066 18:25:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:58.066 18:25:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:58.066 18:25:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:58.066 18:25:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:58.066 18:25:20 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:05:58.066 18:25:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:58.066 18:25:20 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:05:58.066 18:25:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:58.066 18:25:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:58.066 18:25:20 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:58.066 18:25:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:58.066 
18:25:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:58.066 18:25:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:58.066 18:25:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:58.066 18:25:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:58.066 18:25:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:58.066 18:25:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:58.066 18:25:20 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:05:58.066 18:25:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:58.066 18:25:20 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:58.066 18:25:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:58.066 18:25:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:58.066 18:25:20 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:58.066 18:25:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:58.066 18:25:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:58.066 18:25:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:58.066 18:25:20 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:58.066 18:25:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:58.066 18:25:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:58.066 18:25:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:58.066 18:25:20 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:05:58.066 18:25:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:58.066 18:25:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:58.066 18:25:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:58.066 18:25:20 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:58.066 18:25:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:58.066 18:25:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:58.066 18:25:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:58.066 18:25:20 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:05:58.066 18:25:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:58.066 18:25:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:58.066 18:25:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:58.066 18:25:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:58.066 18:25:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:58.066 18:25:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:58.066 18:25:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:58.066 18:25:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:58.066 18:25:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:58.066 18:25:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:58.066 18:25:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:59.001 18:25:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:59.001 18:25:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:59.001 18:25:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:59.002 18:25:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:59.002 18:25:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:59.002 18:25:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:59.002 18:25:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:59.002 18:25:21 accel.accel_copy -- accel/accel.sh@19 
-- # read -r var val 00:05:59.002 18:25:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:59.002 18:25:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:59.002 18:25:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:59.002 18:25:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:59.002 18:25:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:59.002 18:25:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:59.002 18:25:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:59.002 18:25:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:59.002 18:25:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:59.002 18:25:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:59.002 18:25:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:59.002 18:25:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:59.002 18:25:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:59.002 18:25:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:59.002 18:25:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:59.002 18:25:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:59.002 18:25:21 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:59.002 18:25:21 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:05:59.002 18:25:21 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:59.002 00:05:59.002 real 0m1.445s 00:05:59.002 user 0m1.254s 00:05:59.002 sys 0m0.103s 00:05:59.002 18:25:21 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.002 18:25:21 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:05:59.002 ************************************ 00:05:59.002 END TEST accel_copy 00:05:59.002 ************************************ 00:05:59.002 18:25:21 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:59.002 18:25:21 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:59.002 18:25:21 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:59.002 18:25:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.002 18:25:21 accel -- common/autotest_common.sh@10 -- # set +x 00:05:59.002 ************************************ 00:05:59.002 START TEST accel_fill 00:05:59.002 ************************************ 00:05:59.002 18:25:21 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:59.002 18:25:21 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:05:59.002 18:25:21 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:05:59.002 18:25:21 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:59.002 18:25:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:59.002 18:25:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:59.002 18:25:21 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:59.002 18:25:21 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:05:59.002 18:25:21 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:59.002 18:25:21 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:59.002 18:25:21 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:59.002 18:25:21 accel.accel_fill -- 
accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:59.002 18:25:21 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:59.002 18:25:21 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:05:59.002 18:25:21 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:05:59.002 [2024-07-15 18:25:21.559885] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:05:59.002 [2024-07-15 18:25:21.559970] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63602 ] 00:05:59.260 [2024-07-15 18:25:21.700402] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.260 [2024-07-15 18:25:21.782923] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.260 18:25:21 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:59.260 18:25:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:59.260 18:25:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:59.260 18:25:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:59.260 18:25:21 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:59.260 18:25:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:59.260 18:25:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:59.260 18:25:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:59.260 18:25:21 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:05:59.260 18:25:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:59.260 18:25:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:59.260 18:25:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:59.260 18:25:21 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:59.260 18:25:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:59.260 18:25:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:59.260 18:25:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:59.260 18:25:21 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:59.260 18:25:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:59.260 18:25:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:59.261 18:25:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:59.261 18:25:21 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:05:59.261 18:25:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:59.261 18:25:21 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:05:59.261 18:25:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:59.261 18:25:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:59.261 18:25:21 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:05:59.261 18:25:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:59.261 18:25:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:59.261 18:25:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:59.261 18:25:21 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:59.261 18:25:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:59.261 18:25:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:59.261 18:25:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:59.261 18:25:21 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:59.261 18:25:21 accel.accel_fill -- 
accel/accel.sh@21 -- # case "$var" in 00:05:59.261 18:25:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:59.261 18:25:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:59.261 18:25:21 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:05:59.261 18:25:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:59.261 18:25:21 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:05:59.261 18:25:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:59.261 18:25:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:59.261 18:25:21 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:59.261 18:25:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:59.261 18:25:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:59.261 18:25:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:59.261 18:25:21 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:59.261 18:25:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:59.261 18:25:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:59.261 18:25:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:59.261 18:25:21 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:05:59.261 18:25:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:59.261 18:25:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:59.261 18:25:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:59.261 18:25:21 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:05:59.261 18:25:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:59.261 18:25:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:59.261 18:25:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:59.261 18:25:21 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:05:59.261 18:25:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:59.261 18:25:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:59.261 18:25:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:59.261 18:25:21 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:59.261 18:25:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:59.261 18:25:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:59.261 18:25:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:59.261 18:25:21 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:59.261 18:25:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:59.261 18:25:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:59.261 18:25:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:00.636 18:25:22 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:00.636 18:25:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:00.636 18:25:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:00.636 18:25:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:00.636 18:25:22 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:00.636 18:25:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:00.636 18:25:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:00.636 18:25:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:00.636 18:25:22 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:00.636 18:25:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:00.636 18:25:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 
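The repeated "IFS=:", "read -r var val", and "case "$var" in" entries in the trace above are the test script reading the tool's report back as key:value pairs and latching onto the opcode and module it actually ran. A minimal sketch of that parsing pattern, reconstructed from the trace rather than copied from accel.sh (the key names and the sample input are illustrative only):

    #!/usr/bin/env bash
    # Sketch of the key:value parsing loop the xtrace above reflects.
    # Assumption: the key names "opcode"/"module" and the sample input are
    # illustrative, not taken from the real accel.sh source.
    parse_report() {
        local var val accel_opc="" accel_module=""
        while IFS=: read -r var val; do
            case "$var" in
                opcode) accel_opc=$val ;;      # e.g. crc32c, copy, fill, dualcast
                module) accel_module=$val ;;   # e.g. software
                *) : ;;                        # block size, queue depth, run time, ...
            esac
        done
        echo "opcode=$accel_opc module=$accel_module"
    }
    printf 'opcode:fill\nmodule:software\nqueue depth:64\n' | parse_report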
00:06:00.636 18:25:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:00.636 18:25:22 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:00.636 18:25:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:00.636 18:25:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:00.636 18:25:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:00.636 18:25:22 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:00.636 18:25:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:00.636 18:25:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:00.636 18:25:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:00.636 18:25:22 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:00.636 18:25:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:00.636 18:25:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:00.636 18:25:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:00.636 18:25:22 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:00.636 18:25:22 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:00.636 18:25:22 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:00.636 00:06:00.636 real 0m1.433s 00:06:00.636 user 0m1.243s 00:06:00.636 sys 0m0.102s 00:06:00.636 18:25:22 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:00.636 18:25:22 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:00.636 ************************************ 00:06:00.636 END TEST accel_fill 00:06:00.636 ************************************ 00:06:00.636 18:25:23 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:00.636 18:25:23 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:00.636 18:25:23 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:00.636 18:25:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.636 18:25:23 accel -- common/autotest_common.sh@10 -- # set +x 00:06:00.636 ************************************ 00:06:00.636 START TEST accel_copy_crc32c 00:06:00.636 ************************************ 00:06:00.636 18:25:23 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:06:00.636 18:25:23 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:00.636 18:25:23 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:00.636 18:25:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:00.636 18:25:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:00.636 18:25:23 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:00.636 18:25:23 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:00.636 18:25:23 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:00.636 18:25:23 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:00.636 18:25:23 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:00.636 18:25:23 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:00.636 18:25:23 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:00.636 18:25:23 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:00.636 18:25:23 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # 
local IFS=, 00:06:00.636 18:25:23 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:00.636 [2024-07-15 18:25:23.064732] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:06:00.636 [2024-07-15 18:25:23.064817] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63631 ] 00:06:00.636 [2024-07-15 18:25:23.195642] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.893 [2024-07-15 18:25:23.287915] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.893 18:25:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:00.893 18:25:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:00.893 18:25:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:00.893 18:25:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:00.893 18:25:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:00.893 18:25:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:00.893 18:25:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:00.893 18:25:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:00.893 18:25:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:00.893 18:25:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:00.893 18:25:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:00.893 18:25:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:00.893 18:25:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:00.893 18:25:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:00.893 18:25:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:00.893 18:25:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:00.893 18:25:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:00.893 18:25:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:00.893 18:25:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:00.893 18:25:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:00.893 18:25:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:00.893 18:25:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:00.893 18:25:23 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:00.894 18:25:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:00.894 18:25:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:00.894 18:25:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:00.894 18:25:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:00.894 18:25:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:00.894 18:25:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:00.894 18:25:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:00.894 18:25:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:00.894 18:25:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:00.894 18:25:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:00.894 18:25:23 accel.accel_copy_crc32c -- 
accel/accel.sh@20 -- # val='4096 bytes' 00:06:00.894 18:25:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:00.894 18:25:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:00.894 18:25:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:00.894 18:25:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:00.894 18:25:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:00.894 18:25:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:00.894 18:25:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:00.894 18:25:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:00.894 18:25:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:00.894 18:25:23 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:00.894 18:25:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:00.894 18:25:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:00.894 18:25:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:00.894 18:25:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:00.894 18:25:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:00.894 18:25:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:00.894 18:25:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:00.894 18:25:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:00.894 18:25:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:00.894 18:25:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:00.894 18:25:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:00.894 18:25:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:00.894 18:25:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:00.894 18:25:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:00.894 18:25:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:00.894 18:25:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:00.894 18:25:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:00.894 18:25:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:00.894 18:25:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:00.894 18:25:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:00.894 18:25:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:00.894 18:25:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:00.894 18:25:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:00.894 18:25:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:00.894 18:25:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:00.894 18:25:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:00.894 18:25:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:00.894 18:25:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:00.894 18:25:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:00.894 18:25:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:02.267 18:25:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:02.267 18:25:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 
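The full accel_perf command line is visible in each test's trace (here: /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y). Outside the harness it can be launched directly with the same flags; the parameter readings below are inferred from the values the trace reads back ('1 seconds', 4096 bytes, 32 or 64 for queue depth, val=0x80 matching -f 128), not from the tool's help text, and the JSON config fd (-c /dev/fd/62) is left out:

    SPDK=/home/vagrant/spdk_repo/spdk                 # path as used throughout this log
    # copy+crc32c for 1 second, as in TEST accel_copy_crc32c:
    "$SPDK/build/examples/accel_perf" -t 1 -w copy_crc32c -y
    # chained variant, as in TEST accel_copy_crc32c_C2:
    "$SPDK/build/examples/accel_perf" -t 1 -w copy_crc32c -y -C 2
    # fill test, as in TEST accel_fill (-f 128 appears as val=0x80 in the trace):
    "$SPDK/build/examples/accel_perf" -t 1 -w fill -f 128 -q 64 -a 64 -y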
00:06:02.267 18:25:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:02.267 18:25:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:02.267 18:25:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:02.267 18:25:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:02.267 18:25:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:02.267 18:25:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:02.267 18:25:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:02.267 18:25:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:02.267 18:25:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:02.267 18:25:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:02.267 18:25:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:02.267 18:25:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:02.268 18:25:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:02.268 18:25:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:02.268 18:25:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:02.268 18:25:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:02.268 18:25:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:02.268 18:25:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:02.268 18:25:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:02.268 18:25:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:02.268 18:25:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:02.268 18:25:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:02.268 18:25:24 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:02.268 18:25:24 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:02.268 18:25:24 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:02.268 00:06:02.268 real 0m1.432s 00:06:02.268 user 0m1.243s 00:06:02.268 sys 0m0.106s 00:06:02.268 18:25:24 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:02.268 18:25:24 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:02.268 ************************************ 00:06:02.268 END TEST accel_copy_crc32c 00:06:02.268 ************************************ 00:06:02.268 18:25:24 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:02.268 18:25:24 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:02.268 18:25:24 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:02.268 18:25:24 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.268 18:25:24 accel -- common/autotest_common.sh@10 -- # set +x 00:06:02.268 ************************************ 00:06:02.268 START TEST accel_copy_crc32c_C2 00:06:02.268 ************************************ 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@19 -- # read -r var val 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:02.268 [2024-07-15 18:25:24.561293] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:06:02.268 [2024-07-15 18:25:24.561376] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63670 ] 00:06:02.268 [2024-07-15 18:25:24.703108] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.268 [2024-07-15 18:25:24.788798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:02.268 18:25:24 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:02.268 18:25:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:03.644 18:25:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:03.644 18:25:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.644 18:25:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:03.644 18:25:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:03.644 18:25:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:03.644 18:25:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.644 18:25:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:03.644 18:25:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:03.644 18:25:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:03.644 18:25:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.644 18:25:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:03.644 18:25:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:03.644 18:25:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:03.644 18:25:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.644 18:25:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:03.644 18:25:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:03.644 18:25:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:03.644 18:25:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.644 18:25:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:03.644 18:25:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:03.644 18:25:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:03.644 18:25:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.644 18:25:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:03.644 18:25:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:03.644 18:25:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:03.644 18:25:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:03.644 18:25:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:03.644 00:06:03.644 real 0m1.440s 00:06:03.644 user 0m0.021s 00:06:03.644 sys 0m0.004s 00:06:03.644 18:25:25 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 
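Every workload in this block is driven through the same wrapper, visible at the START/END banners: run_test names the sub-test and accel_test passes the workload flags through to accel_perf. The call sites as they appear in accel.sh in this log (run_test and accel_test are SPDK test helpers and are not reproduced here):

    run_test accel_copy_crc32c     accel_test -t 1 -w copy_crc32c -y          # accel.sh@105
    run_test accel_copy_crc32c_C2  accel_test -t 1 -w copy_crc32c -y -C 2     # accel.sh@106
    run_test accel_dualcast        accel_test -t 1 -w dualcast -y             # accel.sh@107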
00:06:03.644 ************************************ 00:06:03.644 END TEST accel_copy_crc32c_C2 00:06:03.644 ************************************ 00:06:03.644 18:25:25 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:03.644 18:25:26 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:03.644 18:25:26 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:03.644 18:25:26 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:03.644 18:25:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.644 18:25:26 accel -- common/autotest_common.sh@10 -- # set +x 00:06:03.644 ************************************ 00:06:03.644 START TEST accel_dualcast 00:06:03.644 ************************************ 00:06:03.644 18:25:26 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:06:03.644 18:25:26 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:03.644 18:25:26 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:03.644 18:25:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:03.644 18:25:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:03.644 18:25:26 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:03.644 18:25:26 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:03.644 18:25:26 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:03.644 18:25:26 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:03.644 18:25:26 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:03.644 18:25:26 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:03.644 18:25:26 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:03.644 18:25:26 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:03.644 18:25:26 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:03.644 18:25:26 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:03.644 [2024-07-15 18:25:26.061290] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 
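The "-c /dev/fd/62" argument on every accel_perf invocation above, together with the "local IFS=," and "jq -r ." steps of build_accel_config, suggests the JSON config is generated on the fly and handed to the tool as a /dev/fd pseudo-file, i.e. bash process substitution. A neutral illustration of that mechanism only (cat stands in for accel_perf, and the JSON body is purely illustrative, not the accel config format):

    # Process substitution exposes a command's output as a /dev/fd/NN path:
    cat <(echo '{"note": "config generated on the fly"}')
    # The consumer sees a pseudo-file such as /dev/fd/63 instead of a file on disk:
    ls -l <(true)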
00:06:03.644 [2024-07-15 18:25:26.061883] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63700 ] 00:06:03.644 [2024-07-15 18:25:26.217468] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.903 [2024-07-15 18:25:26.313445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.903 18:25:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:03.903 18:25:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:03.903 18:25:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:03.903 18:25:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:03.903 18:25:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:03.903 18:25:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:03.903 18:25:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:03.903 18:25:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:03.903 18:25:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:03.903 18:25:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:03.903 18:25:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:03.903 18:25:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:03.903 18:25:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:03.903 18:25:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:03.903 18:25:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:03.903 18:25:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:03.903 18:25:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:03.903 18:25:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:03.903 18:25:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:03.903 18:25:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:03.904 18:25:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:03.904 18:25:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:03.904 18:25:26 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:03.904 18:25:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:03.904 18:25:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:03.904 18:25:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:03.904 18:25:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:03.904 18:25:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:03.904 18:25:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:03.904 18:25:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:03.904 18:25:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:03.904 18:25:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:03.904 18:25:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:03.904 18:25:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:03.904 18:25:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:03.904 18:25:26 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:03.904 18:25:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:03.904 18:25:26 accel.accel_dualcast 
-- accel/accel.sh@19 -- # read -r var val 00:06:03.904 18:25:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:03.904 18:25:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:03.904 18:25:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:03.904 18:25:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:03.904 18:25:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:03.904 18:25:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:03.904 18:25:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:03.904 18:25:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:03.904 18:25:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:03.904 18:25:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:03.904 18:25:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:03.904 18:25:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:03.904 18:25:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:03.904 18:25:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:03.904 18:25:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:03.904 18:25:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:03.904 18:25:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:03.904 18:25:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:03.904 18:25:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:03.904 18:25:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:03.904 18:25:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:03.904 18:25:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:03.904 18:25:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:03.904 18:25:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:03.904 18:25:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:03.904 18:25:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:03.904 18:25:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:03.904 18:25:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:05.282 18:25:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:05.282 18:25:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:05.282 18:25:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:05.282 18:25:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:05.282 18:25:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:05.282 18:25:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:05.282 18:25:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:05.282 18:25:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:05.282 18:25:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:05.282 18:25:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:05.282 18:25:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:05.282 18:25:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:05.282 18:25:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:05.282 18:25:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:05.282 18:25:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:05.282 18:25:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var 
val 00:06:05.282 18:25:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:05.282 18:25:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:05.282 18:25:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:05.282 18:25:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:05.282 18:25:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:05.282 18:25:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:05.282 18:25:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:05.282 18:25:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:05.282 18:25:27 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:05.282 18:25:27 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:05.282 18:25:27 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:05.282 00:06:05.282 real 0m1.461s 00:06:05.282 user 0m1.262s 00:06:05.282 sys 0m0.112s 00:06:05.282 18:25:27 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:05.282 18:25:27 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:05.282 ************************************ 00:06:05.282 END TEST accel_dualcast 00:06:05.282 ************************************ 00:06:05.282 18:25:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:05.282 18:25:27 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:05.282 18:25:27 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:05.282 18:25:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.282 18:25:27 accel -- common/autotest_common.sh@10 -- # set +x 00:06:05.282 ************************************ 00:06:05.282 START TEST accel_compare 00:06:05.282 ************************************ 00:06:05.282 18:25:27 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:06:05.282 18:25:27 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:05.282 18:25:27 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:05.282 18:25:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:05.282 18:25:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:05.282 18:25:27 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:05.282 18:25:27 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:05.282 18:25:27 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:05.282 18:25:27 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:05.282 18:25:27 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:05.282 18:25:27 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:05.282 18:25:27 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:05.282 18:25:27 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:05.282 18:25:27 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:05.282 18:25:27 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:05.282 [2024-07-15 18:25:27.590272] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 
00:06:05.282 [2024-07-15 18:25:27.590358] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63735 ] 00:06:05.282 [2024-07-15 18:25:27.730805] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.282 [2024-07-15 18:25:27.814520] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.282 18:25:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:05.282 18:25:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:05.282 18:25:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:05.282 18:25:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:05.282 18:25:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:05.282 18:25:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:05.282 18:25:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:05.282 18:25:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:05.282 18:25:27 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:05.282 18:25:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:05.282 18:25:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:05.282 18:25:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:05.282 18:25:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:05.282 18:25:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:05.282 18:25:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:05.282 18:25:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:05.282 18:25:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:05.282 18:25:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:05.282 18:25:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:05.282 18:25:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:05.282 18:25:27 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:05.282 18:25:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:05.282 18:25:27 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:05.282 18:25:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:05.282 18:25:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:05.282 18:25:27 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:05.282 18:25:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:05.282 18:25:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:05.282 18:25:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:05.282 18:25:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:05.282 18:25:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:05.282 18:25:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:05.282 18:25:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:05.282 18:25:27 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:05.282 18:25:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:05.282 18:25:27 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:05.282 18:25:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:05.282 18:25:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var 
val 00:06:05.282 18:25:27 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:05.282 18:25:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:05.282 18:25:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:05.283 18:25:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:05.283 18:25:27 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:05.283 18:25:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:05.283 18:25:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:05.283 18:25:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:05.283 18:25:27 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:05.283 18:25:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:05.283 18:25:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:05.283 18:25:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:05.283 18:25:27 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:05.283 18:25:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:05.283 18:25:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:05.283 18:25:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:05.283 18:25:27 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:05.283 18:25:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:05.283 18:25:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:05.283 18:25:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:05.283 18:25:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:05.283 18:25:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:05.283 18:25:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:05.283 18:25:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:05.283 18:25:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:05.283 18:25:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:05.283 18:25:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:05.283 18:25:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:06.690 18:25:28 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:06.690 18:25:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:06.690 18:25:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:06.690 18:25:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:06.690 18:25:28 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:06.690 18:25:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:06.690 18:25:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:06.690 18:25:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:06.690 18:25:28 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:06.690 18:25:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:06.690 18:25:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:06.690 18:25:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:06.690 18:25:28 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:06.690 18:25:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:06.690 18:25:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:06.690 18:25:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:06.690 18:25:28 accel.accel_compare -- accel/accel.sh@20 -- # val= 
00:06:06.690 18:25:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:06.690 18:25:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:06.690 18:25:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:06.690 18:25:28 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:06.690 18:25:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:06.690 18:25:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:06.690 18:25:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:06.690 18:25:28 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:06.690 18:25:28 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:06.690 18:25:28 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:06.690 00:06:06.690 real 0m1.435s 00:06:06.690 user 0m1.241s 00:06:06.690 sys 0m0.105s 00:06:06.690 18:25:28 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:06.690 18:25:28 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:06.690 ************************************ 00:06:06.690 END TEST accel_compare 00:06:06.690 ************************************ 00:06:06.690 18:25:29 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:06.690 18:25:29 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:06.690 18:25:29 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:06.690 18:25:29 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.690 18:25:29 accel -- common/autotest_common.sh@10 -- # set +x 00:06:06.690 ************************************ 00:06:06.690 START TEST accel_xor 00:06:06.690 ************************************ 00:06:06.690 18:25:29 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:06:06.690 18:25:29 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:06.690 18:25:29 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:06.690 18:25:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:06.690 18:25:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:06.690 18:25:29 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:06.690 18:25:29 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:06.690 18:25:29 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:06.690 18:25:29 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:06.690 18:25:29 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:06.690 18:25:29 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:06.690 18:25:29 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:06.690 18:25:29 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:06.690 18:25:29 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:06.690 18:25:29 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:06.691 [2024-07-15 18:25:29.090760] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 
00:06:06.691 [2024-07-15 18:25:29.090833] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63771 ] 00:06:06.691 [2024-07-15 18:25:29.231942] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.950 [2024-07-15 18:25:29.316557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.950 18:25:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:06.950 18:25:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:06.950 18:25:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:06.950 18:25:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:06.950 18:25:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:06.950 18:25:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:06.950 18:25:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:06.950 18:25:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:06.950 18:25:29 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:06.950 18:25:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:06.950 18:25:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:06.950 18:25:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:06.950 18:25:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:06.950 18:25:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:06.950 18:25:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:06.950 18:25:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:06.950 18:25:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:06.950 18:25:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:06.950 18:25:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:06.950 18:25:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:06.950 18:25:29 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:06.950 18:25:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:06.950 18:25:29 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:06.950 18:25:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:06.950 18:25:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:06.950 18:25:29 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:06.950 18:25:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:06.950 18:25:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:06.950 18:25:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:06.950 18:25:29 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:06.950 18:25:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:06.950 18:25:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:06.950 18:25:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:06.950 18:25:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:06.950 18:25:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:06.950 18:25:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:06.950 18:25:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:06.950 18:25:29 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:06.950 18:25:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:06.950 18:25:29 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
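Just before each accel_perf launch, the trace walks through build_accel_config: an accel_json_cfg array that stays empty in these runs (every '0 -gt 0' and "-n ''" guard fails), a 'local IFS=,' and a final 'jq -r .'. A minimal sketch of that comma-join-then-jq step, assuming this is roughly how the JSON handed to '-c /dev/fd/62' is assembled; the fragment added below is a placeholder, not a real accel module config:

  # Join any accumulated JSON fragments with commas and let jq validate and
  # pretty-print the result, as the build_accel_config trace above suggests.
  accel_json_cfg=()                        # stays empty in the traced runs
  accel_json_cfg+=('"placeholder": true')  # illustrative fragment only
  IFS=,                                    # mirrors the 'local IFS=,' in the trace
  echo "{ ${accel_json_cfg[*]} }" | jq -r .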
00:06:06.950 18:25:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:06.950 18:25:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:06.950 18:25:29 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:06.950 18:25:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:06.950 18:25:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:06.950 18:25:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:06.950 18:25:29 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:06.950 18:25:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:06.950 18:25:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:06.950 18:25:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:06.950 18:25:29 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:06.950 18:25:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:06.950 18:25:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:06.950 18:25:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:06.950 18:25:29 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:06.950 18:25:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:06.950 18:25:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:06.950 18:25:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:06.950 18:25:29 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:06.950 18:25:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:06.950 18:25:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:06.950 18:25:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:06.950 18:25:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:06.950 18:25:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:06.950 18:25:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:06.950 18:25:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:06.950 18:25:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:06.950 18:25:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:06.950 18:25:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:06.950 18:25:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:07.886 18:25:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:07.886 18:25:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:07.886 18:25:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:07.886 18:25:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:07.886 18:25:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:07.886 18:25:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:07.886 18:25:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:07.886 18:25:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:07.886 18:25:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:07.886 18:25:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:07.886 18:25:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:07.886 18:25:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:07.886 18:25:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:07.886 18:25:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:07.886 18:25:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:07.886 18:25:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:07.886 18:25:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:07.886 18:25:30 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:06:07.886 18:25:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:07.886 18:25:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:07.886 18:25:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:07.886 18:25:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:07.886 18:25:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:07.886 18:25:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:07.886 18:25:30 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:07.886 18:25:30 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:07.886 18:25:30 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:07.886 00:06:07.886 real 0m1.434s 00:06:07.886 user 0m1.236s 00:06:07.886 sys 0m0.112s 00:06:07.886 18:25:30 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:07.886 ************************************ 00:06:07.886 END TEST accel_xor 00:06:07.886 18:25:30 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:07.886 ************************************ 00:06:08.145 18:25:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:08.145 18:25:30 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:08.145 18:25:30 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:08.145 18:25:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.145 18:25:30 accel -- common/autotest_common.sh@10 -- # set +x 00:06:08.145 ************************************ 00:06:08.145 START TEST accel_xor 00:06:08.145 ************************************ 00:06:08.145 18:25:30 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:06:08.145 18:25:30 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:08.145 18:25:30 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:08.145 18:25:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:08.145 18:25:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:08.145 18:25:30 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:08.145 18:25:30 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:08.145 18:25:30 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:08.145 18:25:30 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:08.145 18:25:30 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:08.145 18:25:30 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:08.145 18:25:30 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:08.145 18:25:30 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:08.145 18:25:30 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:08.145 18:25:30 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:08.145 [2024-07-15 18:25:30.591351] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 
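run_test accel_xor launches accel_test -t 1 -w xor -y -x 3, which in turn runs build/examples/accel_perf with the same flags plus the JSON config on /dev/fd/62. A rough by-hand equivalent, under the assumptions that the SPDK tree is already built and that -x selects the number of xor source buffers; the wrapper's config fd is omitted here:

  # Reproduce the traced xor case directly; flags copied from the trace above.
  SPDK_REPO=/home/vagrant/spdk_repo/spdk   # path as it appears in the trace
  "$SPDK_REPO/build/examples/accel_perf" -t 1 -w xor -y -x 3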
00:06:08.145 [2024-07-15 18:25:30.591437] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63806 ] 00:06:08.145 [2024-07-15 18:25:30.727755] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.404 [2024-07-15 18:25:30.813873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.404 18:25:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:08.404 18:25:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:08.404 18:25:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:08.404 18:25:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:08.404 18:25:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:08.404 18:25:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:08.404 18:25:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:08.404 18:25:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:08.404 18:25:30 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:08.404 18:25:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:08.405 18:25:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:08.405 18:25:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:08.405 18:25:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:08.405 18:25:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:08.405 18:25:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:08.405 18:25:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:08.405 18:25:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:08.405 18:25:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:08.405 18:25:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:08.405 18:25:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:08.405 18:25:30 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:08.405 18:25:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:08.405 18:25:30 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:08.405 18:25:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:08.405 18:25:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:08.405 18:25:30 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:08.405 18:25:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:08.405 18:25:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:08.405 18:25:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:08.405 18:25:30 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:08.405 18:25:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:08.405 18:25:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:08.405 18:25:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:08.405 18:25:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:08.405 18:25:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:08.405 18:25:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:08.405 18:25:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:08.405 18:25:30 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:08.405 18:25:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:08.405 18:25:30 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:06:08.405 18:25:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:08.405 18:25:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:08.405 18:25:30 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:08.405 18:25:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:08.405 18:25:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:08.405 18:25:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:08.405 18:25:30 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:08.405 18:25:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:08.405 18:25:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:08.405 18:25:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:08.405 18:25:30 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:08.405 18:25:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:08.405 18:25:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:08.405 18:25:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:08.405 18:25:30 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:08.405 18:25:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:08.405 18:25:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:08.405 18:25:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:08.405 18:25:30 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:08.405 18:25:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:08.405 18:25:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:08.405 18:25:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:08.405 18:25:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:08.405 18:25:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:08.405 18:25:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:08.405 18:25:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:08.405 18:25:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:08.405 18:25:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:08.405 18:25:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:08.405 18:25:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:09.383 18:25:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:09.383 18:25:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:09.383 18:25:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:09.383 18:25:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:09.383 18:25:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:09.383 18:25:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:09.383 18:25:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:09.383 18:25:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:09.383 18:25:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:09.383 18:25:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:09.383 18:25:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:09.383 18:25:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:09.383 18:25:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:09.383 18:25:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:09.383 18:25:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:09.383 18:25:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:09.383 18:25:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:09.383 18:25:31 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:06:09.384 18:25:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:09.384 18:25:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:09.384 18:25:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:09.641 18:25:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:09.641 18:25:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:09.641 18:25:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:09.641 18:25:31 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:09.641 18:25:31 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:09.641 18:25:31 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:09.641 00:06:09.641 real 0m1.438s 00:06:09.641 user 0m1.248s 00:06:09.641 sys 0m0.099s 00:06:09.641 18:25:31 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:09.641 ************************************ 00:06:09.641 END TEST accel_xor 00:06:09.641 ************************************ 00:06:09.641 18:25:31 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:09.641 18:25:32 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:09.641 18:25:32 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:09.641 18:25:32 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:09.641 18:25:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.641 18:25:32 accel -- common/autotest_common.sh@10 -- # set +x 00:06:09.641 ************************************ 00:06:09.641 START TEST accel_dif_verify 00:06:09.641 ************************************ 00:06:09.641 18:25:32 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:06:09.641 18:25:32 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:09.641 18:25:32 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:09.641 18:25:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:09.641 18:25:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:09.641 18:25:32 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:09.641 18:25:32 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:09.641 18:25:32 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:09.641 18:25:32 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:09.641 18:25:32 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:09.641 18:25:32 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.641 18:25:32 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.641 18:25:32 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:09.641 18:25:32 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:09.641 18:25:32 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:09.641 [2024-07-15 18:25:32.103744] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 
00:06:09.641 [2024-07-15 18:25:32.103891] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63840 ] 00:06:09.641 [2024-07-15 18:25:32.236439] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.898 [2024-07-15 18:25:32.317260] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.898 18:25:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:09.898 18:25:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:09.898 18:25:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:09.898 18:25:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:09.898 18:25:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:09.898 18:25:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:09.898 18:25:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:09.898 18:25:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:09.898 18:25:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:09.898 18:25:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:09.898 18:25:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:09.898 18:25:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:09.898 18:25:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:09.898 18:25:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:09.898 18:25:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:09.898 18:25:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:09.898 18:25:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:09.898 18:25:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:09.898 18:25:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:09.898 18:25:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:09.898 18:25:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:09.898 18:25:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:09.898 18:25:32 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:09.898 18:25:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:09.898 18:25:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:09.898 18:25:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:09.898 18:25:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:09.898 18:25:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:09.898 18:25:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:09.898 18:25:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:09.898 18:25:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:09.898 18:25:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:09.898 18:25:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:09.898 18:25:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:09.898 18:25:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:09.898 18:25:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:09.898 18:25:32 
accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:09.898 18:25:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:09.898 18:25:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:09.898 18:25:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:09.898 18:25:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:09.898 18:25:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:09.898 18:25:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:09.898 18:25:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:09.898 18:25:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:09.898 18:25:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:09.898 18:25:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:09.898 18:25:32 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:09.898 18:25:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:09.898 18:25:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:09.898 18:25:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:09.898 18:25:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:09.898 18:25:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:09.898 18:25:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:09.898 18:25:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:09.898 18:25:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:09.898 18:25:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:09.898 18:25:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:09.898 18:25:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:09.898 18:25:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:09.898 18:25:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:09.898 18:25:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:09.898 18:25:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:09.898 18:25:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:09.898 18:25:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:09.898 18:25:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:09.898 18:25:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:09.898 18:25:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:09.898 18:25:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:09.898 18:25:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:09.898 18:25:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:09.898 18:25:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:09.898 18:25:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:09.898 18:25:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:09.898 18:25:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:09.898 18:25:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:09.898 18:25:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:09.898 18:25:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:11.272 18:25:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:11.272 18:25:33 
accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:11.272 18:25:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:11.272 18:25:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:11.272 ************************************ 00:06:11.272 END TEST accel_dif_verify 00:06:11.272 ************************************ 00:06:11.272 18:25:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:11.272 18:25:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:11.272 18:25:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:11.272 18:25:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:11.272 18:25:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:11.272 18:25:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:11.272 18:25:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:11.272 18:25:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:11.272 18:25:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:11.272 18:25:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:11.272 18:25:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:11.272 18:25:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:11.272 18:25:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:11.272 18:25:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:11.272 18:25:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:11.272 18:25:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:11.272 18:25:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:11.272 18:25:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:11.272 18:25:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:11.272 18:25:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:11.272 18:25:33 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:11.272 18:25:33 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:11.272 18:25:33 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:11.272 00:06:11.272 real 0m1.423s 00:06:11.272 user 0m0.017s 00:06:11.272 sys 0m0.004s 00:06:11.272 18:25:33 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:11.272 18:25:33 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:11.272 18:25:33 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:11.272 18:25:33 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:11.272 18:25:33 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:11.272 18:25:33 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.272 18:25:33 accel -- common/autotest_common.sh@10 -- # set +x 00:06:11.272 ************************************ 00:06:11.272 START TEST accel_dif_generate 00:06:11.272 ************************************ 00:06:11.272 18:25:33 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:06:11.272 18:25:33 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:11.272 18:25:33 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:11.272 18:25:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:11.272 18:25:33 
accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:11.272 18:25:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:11.272 18:25:33 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:11.272 18:25:33 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:11.272 18:25:33 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:11.272 18:25:33 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:11.272 18:25:33 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:11.272 18:25:33 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:11.272 18:25:33 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:11.272 18:25:33 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:11.272 18:25:33 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:11.272 [2024-07-15 18:25:33.583274] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:06:11.272 [2024-07-15 18:25:33.583367] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63875 ] 00:06:11.272 [2024-07-15 18:25:33.718895] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.272 [2024-07-15 18:25:33.806173] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.272 18:25:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:11.272 18:25:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:11.272 18:25:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:11.272 18:25:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:11.272 18:25:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:11.272 18:25:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:11.272 18:25:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:11.272 18:25:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:11.272 18:25:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:11.272 18:25:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:11.272 18:25:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:11.272 18:25:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:11.272 18:25:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:11.272 18:25:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:11.272 18:25:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:11.272 18:25:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:11.272 18:25:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:11.272 18:25:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:11.272 18:25:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:11.272 18:25:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:11.272 18:25:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:11.272 18:25:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:11.272 18:25:33 
accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:11.272 18:25:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:11.272 18:25:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:11.272 18:25:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:11.272 18:25:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:11.272 18:25:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:11.272 18:25:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:11.272 18:25:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:11.272 18:25:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:11.272 18:25:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:11.272 18:25:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:11.272 18:25:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:11.272 18:25:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:11.272 18:25:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:11.272 18:25:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:11.272 18:25:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:11.272 18:25:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:11.272 18:25:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:11.272 18:25:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:11.272 18:25:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:11.272 18:25:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:11.272 18:25:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:11.272 18:25:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:11.272 18:25:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:11.272 18:25:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:11.272 18:25:33 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:11.272 18:25:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:11.272 18:25:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:11.272 18:25:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:11.272 18:25:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:11.273 18:25:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:11.273 18:25:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:11.273 18:25:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:11.273 18:25:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:11.273 18:25:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:11.273 18:25:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:11.273 18:25:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:11.273 18:25:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:11.273 18:25:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:11.273 18:25:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:11.273 18:25:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:06:11.273 18:25:33 accel.accel_dif_generate -- 
accel/accel.sh@21 -- # case "$var" in 00:06:11.273 18:25:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:11.273 18:25:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:11.273 18:25:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:11.273 18:25:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:11.273 18:25:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:11.273 18:25:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:11.273 18:25:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:11.273 18:25:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:11.273 18:25:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:11.273 18:25:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:11.273 18:25:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:11.273 18:25:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:11.273 18:25:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:11.273 18:25:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:12.643 18:25:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:12.643 18:25:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:12.643 18:25:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:12.643 18:25:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:12.643 18:25:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:12.643 18:25:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:12.643 18:25:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:12.643 18:25:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:12.643 18:25:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:12.643 18:25:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:12.643 18:25:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:12.643 18:25:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:12.643 18:25:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:12.643 18:25:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:12.643 18:25:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:12.643 18:25:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:12.643 18:25:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:12.643 18:25:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:12.643 18:25:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:12.643 ************************************ 00:06:12.643 END TEST accel_dif_generate 00:06:12.643 ************************************ 00:06:12.643 18:25:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:12.643 18:25:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:12.643 18:25:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:12.643 18:25:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:12.643 18:25:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:12.643 18:25:34 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:12.643 18:25:34 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:12.643 
18:25:34 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:12.643 00:06:12.643 real 0m1.442s 00:06:12.643 user 0m1.246s 00:06:12.643 sys 0m0.108s 00:06:12.643 18:25:34 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:12.643 18:25:34 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:12.643 18:25:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:12.643 18:25:35 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:12.643 18:25:35 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:12.643 18:25:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.643 18:25:35 accel -- common/autotest_common.sh@10 -- # set +x 00:06:12.643 ************************************ 00:06:12.643 START TEST accel_dif_generate_copy 00:06:12.643 ************************************ 00:06:12.643 18:25:35 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:06:12.644 18:25:35 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:12.644 18:25:35 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:12.644 18:25:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.644 18:25:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.644 18:25:35 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:12.644 18:25:35 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:12.644 18:25:35 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:12.644 18:25:35 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:12.644 18:25:35 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:12.644 18:25:35 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.644 18:25:35 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.644 18:25:35 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:12.644 18:25:35 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:12.644 18:25:35 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:12.644 [2024-07-15 18:25:35.092966] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 
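The dif_generate pass above drives the accel_perf example binary through the accel.sh harness. A minimal sketch of reproducing the same run by hand is below; the binary path and the -t/-w flags are copied from the command line recorded in this log, while dropping the harness-generated -c /dev/fd/62 JSON config (which appears empty here, since no accel module overrides are set and the software module is selected) is an assumption, not something the harness itself does.

#!/usr/bin/env bash
# Hypothetical standalone reproduction of the dif_generate run seen above.
# Paths and flags are taken from the logged command line; running without
# the harness-supplied JSON config (-c /dev/fd/62) is an assumption.
set -euo pipefail

SPDK_DIR=/home/vagrant/spdk_repo/spdk   # repo location used by this CI job

# -t 1            : run the workload for one second
# -w dif_generate : generate DIF metadata per block (software module in this run)
"$SPDK_DIR/build/examples/accel_perf" -t 1 -w dif_generate
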
00:06:12.644 [2024-07-15 18:25:35.093214] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63904 ] 00:06:12.644 [2024-07-15 18:25:35.234540] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.902 [2024-07-15 18:25:35.332411] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.902 18:25:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:12.902 18:25:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.902 18:25:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.902 18:25:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.902 18:25:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:12.902 18:25:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.902 18:25:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.902 18:25:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.902 18:25:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:12.902 18:25:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.902 18:25:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.902 18:25:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.902 18:25:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:12.902 18:25:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.902 18:25:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.902 18:25:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.902 18:25:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:12.902 18:25:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.902 18:25:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.902 18:25:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.902 18:25:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:12.902 18:25:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.902 18:25:35 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:12.902 18:25:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.902 18:25:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.902 18:25:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:12.902 18:25:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.902 18:25:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.902 18:25:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.902 18:25:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:12.902 18:25:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.902 18:25:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.902 18:25:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.902 18:25:35 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:12.902 18:25:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.902 18:25:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.902 18:25:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.902 18:25:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:12.902 18:25:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.902 18:25:35 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:12.902 18:25:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.902 18:25:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.902 18:25:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:12.902 18:25:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.902 18:25:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.902 18:25:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.902 18:25:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:12.902 18:25:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.902 18:25:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.902 18:25:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.902 18:25:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:12.902 18:25:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.902 18:25:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.902 18:25:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.902 18:25:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:12.902 18:25:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.902 18:25:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.902 18:25:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.902 18:25:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:12.902 18:25:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.902 18:25:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.902 18:25:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.902 18:25:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:12.902 18:25:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.902 18:25:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.902 18:25:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.902 18:25:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:12.902 18:25:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.902 18:25:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.902 18:25:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:14.288 18:25:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:14.288 18:25:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:14.288 18:25:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:06:14.288 18:25:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:14.288 18:25:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:14.288 18:25:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:14.288 18:25:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:14.288 18:25:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:14.288 18:25:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:14.288 18:25:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:14.288 18:25:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:14.288 18:25:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:14.288 18:25:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:14.288 18:25:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:14.288 18:25:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:14.288 18:25:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:14.288 18:25:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:14.288 18:25:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:14.288 18:25:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:14.288 18:25:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:14.288 18:25:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:14.288 18:25:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:14.288 18:25:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:14.288 18:25:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:14.288 18:25:36 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:14.288 18:25:36 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:14.288 18:25:36 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:14.288 00:06:14.288 real 0m1.453s 00:06:14.288 user 0m1.250s 00:06:14.288 sys 0m0.114s 00:06:14.288 18:25:36 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:14.288 ************************************ 00:06:14.288 END TEST accel_dif_generate_copy 00:06:14.288 ************************************ 00:06:14.288 18:25:36 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:14.288 18:25:36 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:14.288 18:25:36 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:14.288 18:25:36 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:14.288 18:25:36 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:14.288 18:25:36 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.288 18:25:36 accel -- common/autotest_common.sh@10 -- # set +x 00:06:14.288 ************************************ 00:06:14.288 START TEST accel_comp 00:06:14.288 ************************************ 00:06:14.288 18:25:36 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:14.288 18:25:36 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:06:14.288 18:25:36 
accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:06:14.288 18:25:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:14.288 18:25:36 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:14.288 18:25:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:14.288 18:25:36 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:14.288 18:25:36 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:14.288 18:25:36 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:14.288 18:25:36 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:14.288 18:25:36 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.288 18:25:36 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.288 18:25:36 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:14.288 18:25:36 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:14.288 18:25:36 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:14.288 [2024-07-15 18:25:36.614822] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:06:14.288 [2024-07-15 18:25:36.615042] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63938 ] 00:06:14.288 [2024-07-15 18:25:36.758843] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.288 [2024-07-15 18:25:36.839284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.288 18:25:36 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:14.288 18:25:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:14.288 18:25:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:14.288 18:25:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:14.288 18:25:36 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:14.288 18:25:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:14.288 18:25:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:14.288 18:25:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:14.288 18:25:36 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:14.288 18:25:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:14.288 18:25:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:14.288 18:25:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:14.288 18:25:36 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:14.288 18:25:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:14.288 18:25:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:14.288 18:25:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:14.288 18:25:36 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:14.288 18:25:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:14.288 18:25:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:14.288 18:25:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:14.288 18:25:36 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:14.288 18:25:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:14.288 18:25:36 accel.accel_comp -- accel/accel.sh@19 -- # 
IFS=: 00:06:14.288 18:25:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:14.288 18:25:36 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:14.288 18:25:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:14.288 18:25:36 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:14.288 18:25:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:14.288 18:25:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:14.288 18:25:36 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:14.288 18:25:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:14.288 18:25:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:14.288 18:25:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:14.288 18:25:36 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:14.288 18:25:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:14.288 18:25:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:14.288 18:25:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:14.288 18:25:36 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:14.288 18:25:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:14.288 18:25:36 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:14.288 18:25:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:14.288 18:25:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:14.545 18:25:36 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:14.545 18:25:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:14.545 18:25:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:14.545 18:25:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:14.545 18:25:36 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:14.545 18:25:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:14.545 18:25:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:14.545 18:25:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:14.545 18:25:36 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:14.545 18:25:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:14.545 18:25:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:14.545 18:25:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:14.545 18:25:36 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:14.545 18:25:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:14.545 18:25:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:14.545 18:25:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:14.545 18:25:36 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:14.545 18:25:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:14.545 18:25:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:14.545 18:25:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:14.545 18:25:36 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:14.545 18:25:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:14.546 18:25:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:14.546 18:25:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:14.546 18:25:36 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:14.546 18:25:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:14.546 18:25:36 
accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:14.546 18:25:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:14.546 18:25:36 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:14.546 18:25:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:14.546 18:25:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:14.546 18:25:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:15.481 18:25:38 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:15.481 18:25:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:15.481 18:25:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:15.481 18:25:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:15.481 18:25:38 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:15.481 18:25:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:15.481 18:25:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:15.481 18:25:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:15.481 18:25:38 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:15.481 18:25:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:15.481 18:25:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:15.481 18:25:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:15.481 18:25:38 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:15.481 18:25:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:15.481 18:25:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:15.481 18:25:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:15.481 18:25:38 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:15.481 18:25:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:15.481 18:25:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:15.481 18:25:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:15.481 18:25:38 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:15.481 18:25:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:15.481 18:25:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:15.481 18:25:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:15.481 18:25:38 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:15.481 18:25:38 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:15.481 18:25:38 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:15.481 00:06:15.481 real 0m1.447s 00:06:15.481 user 0m1.257s 00:06:15.481 sys 0m0.105s 00:06:15.481 18:25:38 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:15.481 ************************************ 00:06:15.481 END TEST accel_comp 00:06:15.481 ************************************ 00:06:15.481 18:25:38 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:15.481 18:25:38 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:15.481 18:25:38 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:15.481 18:25:38 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:15.481 18:25:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.481 18:25:38 accel -- common/autotest_common.sh@10 -- # set +x 00:06:15.738 ************************************ 00:06:15.738 START TEST accel_decomp 00:06:15.738 ************************************ 00:06:15.738 18:25:38 
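The accel_comp and accel_decomp passes exercise the same accel_perf binary with a file-backed workload. A sketch of the equivalent standalone invocations is below; every flag is lifted from the logged command lines, and the inline flag descriptions are best-effort readings of this log rather than authoritative accel_perf documentation.

#!/usr/bin/env bash
# Sketch of the compress/decompress runs recorded above, outside run_test.
# Flag comments are best-effort readings of the logged command lines.
set -euo pipefail

SPDK_DIR=/home/vagrant/spdk_repo/spdk
BIB=$SPDK_DIR/test/accel/bib            # corpus file fed to the workload via -l

# One-second software compress pass over the corpus file
"$SPDK_DIR/build/examples/accel_perf" -t 1 -w compress -l "$BIB"

# Matching decompress pass; -y appears to ask accel_perf to verify the output
"$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress -l "$BIB" -y
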
accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:15.738 18:25:38 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:15.738 18:25:38 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:15.738 18:25:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:15.738 18:25:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:15.738 18:25:38 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:15.738 18:25:38 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:15.738 18:25:38 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:15.738 18:25:38 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:15.738 18:25:38 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:15.738 18:25:38 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:15.738 18:25:38 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.738 18:25:38 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:15.738 18:25:38 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:15.738 18:25:38 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:15.738 [2024-07-15 18:25:38.124084] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:06:15.738 [2024-07-15 18:25:38.124166] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63973 ] 00:06:15.738 [2024-07-15 18:25:38.264332] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.738 [2024-07-15 18:25:38.343131] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.996 18:25:38 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:15.996 18:25:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:15.996 18:25:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:15.996 18:25:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:15.996 18:25:38 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:15.996 18:25:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:15.996 18:25:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:15.996 18:25:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:15.996 18:25:38 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:15.996 18:25:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:15.996 18:25:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:15.996 18:25:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:15.996 18:25:38 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:15.996 18:25:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:15.996 18:25:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:15.996 18:25:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:15.996 18:25:38 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:15.996 18:25:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:15.996 18:25:38 accel.accel_decomp -- 
accel/accel.sh@19 -- # IFS=: 00:06:15.996 18:25:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:15.996 18:25:38 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:15.996 18:25:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:15.996 18:25:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:15.996 18:25:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:15.996 18:25:38 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:15.996 18:25:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:15.996 18:25:38 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:15.996 18:25:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:15.996 18:25:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:15.996 18:25:38 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:15.996 18:25:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:15.996 18:25:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:15.996 18:25:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:15.996 18:25:38 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:15.996 18:25:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:15.996 18:25:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:15.996 18:25:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:15.996 18:25:38 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:06:15.996 18:25:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:15.996 18:25:38 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:15.996 18:25:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:15.996 18:25:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:15.996 18:25:38 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:15.996 18:25:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:15.996 18:25:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:15.996 18:25:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:15.996 18:25:38 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:15.996 18:25:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:15.996 18:25:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:15.996 18:25:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:15.996 18:25:38 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:15.996 18:25:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:15.996 18:25:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:15.996 18:25:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:15.996 18:25:38 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:15.996 18:25:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:15.996 18:25:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:15.996 18:25:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:15.996 18:25:38 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:15.996 18:25:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:15.996 18:25:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:15.996 18:25:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:15.996 18:25:38 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 
00:06:15.996 18:25:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:15.996 18:25:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:15.996 18:25:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:15.996 18:25:38 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:15.996 18:25:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:15.996 18:25:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:15.996 18:25:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:15.996 18:25:38 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:15.996 18:25:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:15.996 18:25:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:15.996 18:25:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:16.927 18:25:39 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:16.927 18:25:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:16.927 18:25:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:16.927 18:25:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:16.927 18:25:39 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:16.927 18:25:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:16.927 18:25:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:16.927 18:25:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:16.927 18:25:39 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:16.927 18:25:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:16.927 18:25:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:16.927 18:25:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:16.927 18:25:39 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:16.927 18:25:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:16.927 18:25:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:16.927 18:25:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:16.927 18:25:39 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:16.927 18:25:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:16.927 18:25:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:16.927 18:25:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:16.927 18:25:39 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:16.927 18:25:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:16.927 18:25:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:16.927 18:25:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:16.927 18:25:39 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:16.927 18:25:39 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:16.927 18:25:39 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:16.927 00:06:16.927 real 0m1.436s 00:06:16.927 user 0m1.244s 00:06:16.927 sys 0m0.102s 00:06:16.927 18:25:39 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:16.927 ************************************ 00:06:16.927 END TEST accel_decomp 00:06:16.927 ************************************ 00:06:16.927 18:25:39 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:17.183 18:25:39 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:17.183 18:25:39 accel -- accel/accel.sh@118 -- # run_test 
accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:17.183 18:25:39 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:17.183 18:25:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.183 18:25:39 accel -- common/autotest_common.sh@10 -- # set +x 00:06:17.183 ************************************ 00:06:17.183 START TEST accel_decomp_full 00:06:17.183 ************************************ 00:06:17.183 18:25:39 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:17.183 18:25:39 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:06:17.183 18:25:39 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:06:17.183 18:25:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:17.183 18:25:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:17.183 18:25:39 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:17.183 18:25:39 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:17.183 18:25:39 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:06:17.183 18:25:39 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:17.183 18:25:39 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:17.183 18:25:39 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.183 18:25:39 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.183 18:25:39 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:17.183 18:25:39 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:06:17.183 18:25:39 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:06:17.183 [2024-07-15 18:25:39.625198] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 
00:06:17.183 [2024-07-15 18:25:39.625282] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64007 ] 00:06:17.183 [2024-07-15 18:25:39.766225] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.442 [2024-07-15 18:25:39.860328] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.442 18:25:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:17.442 18:25:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:17.442 18:25:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:17.442 18:25:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:17.442 18:25:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:17.442 18:25:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:17.442 18:25:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:17.442 18:25:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:17.442 18:25:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:17.442 18:25:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:17.442 18:25:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:17.442 18:25:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:17.442 18:25:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:06:17.442 18:25:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:17.442 18:25:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:17.442 18:25:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:17.442 18:25:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:17.442 18:25:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:17.442 18:25:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:17.442 18:25:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:17.442 18:25:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:17.442 18:25:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:17.442 18:25:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:17.442 18:25:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:17.442 18:25:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:06:17.442 18:25:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:17.442 18:25:39 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:17.442 18:25:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:17.442 18:25:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:17.442 18:25:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:17.442 18:25:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:17.442 18:25:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:17.442 18:25:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:17.442 18:25:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:17.442 18:25:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:17.442 18:25:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:17.442 18:25:39 
accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:17.442 18:25:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:06:17.442 18:25:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:17.442 18:25:39 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:06:17.442 18:25:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:17.442 18:25:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:17.442 18:25:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:17.442 18:25:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:17.442 18:25:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:17.443 18:25:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:17.443 18:25:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:17.443 18:25:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:17.443 18:25:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:17.443 18:25:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:17.443 18:25:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:17.443 18:25:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:17.443 18:25:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:17.443 18:25:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:17.443 18:25:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:06:17.443 18:25:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:17.443 18:25:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:17.443 18:25:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:17.443 18:25:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:17.443 18:25:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:17.443 18:25:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:17.443 18:25:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:17.443 18:25:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:06:17.443 18:25:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:17.443 18:25:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:17.443 18:25:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:17.443 18:25:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:17.443 18:25:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:17.443 18:25:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:17.443 18:25:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:17.443 18:25:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:17.443 18:25:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:17.443 18:25:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:17.443 18:25:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:18.833 18:25:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:18.833 18:25:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:18.833 18:25:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:18.833 18:25:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:18.833 18:25:41 
accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:18.833 18:25:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:18.833 18:25:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:18.833 18:25:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:18.833 18:25:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:18.833 18:25:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:18.833 18:25:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:18.833 18:25:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:18.833 18:25:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:18.833 18:25:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:18.833 18:25:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:18.833 18:25:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:18.833 18:25:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:18.833 18:25:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:18.833 18:25:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:18.833 18:25:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:18.833 18:25:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:18.833 18:25:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:18.833 18:25:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:18.833 18:25:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:18.833 18:25:41 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:18.833 18:25:41 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:18.833 18:25:41 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:18.833 00:06:18.833 real 0m1.465s 00:06:18.833 user 0m1.256s 00:06:18.833 sys 0m0.120s 00:06:18.833 18:25:41 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:18.833 18:25:41 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:06:18.833 ************************************ 00:06:18.833 END TEST accel_decomp_full 00:06:18.833 ************************************ 00:06:18.833 18:25:41 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:18.833 18:25:41 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:18.833 18:25:41 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:18.833 18:25:41 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.833 18:25:41 accel -- common/autotest_common.sh@10 -- # set +x 00:06:18.833 ************************************ 00:06:18.833 START TEST accel_decomp_mcore 00:06:18.833 ************************************ 00:06:18.833 18:25:41 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:18.833 18:25:41 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:18.833 18:25:41 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:18.833 18:25:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:18.833 18:25:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:18.833 18:25:41 accel.accel_decomp_mcore -- 
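The accel_decomp_full variant just finished above adds -o 0, and the harness then reports a '111250 bytes' transfer size instead of the default '4096 bytes', which suggests -o 0 lets the run cover the full input rather than fixed 4096-byte chunks. That reading is an inference from this log, not a documented guarantee; the sketch below simply mirrors the logged command line under that assumption.

#!/usr/bin/env bash
# Hypothetical standalone version of the accel_decomp_full run above.
# The -o 0 reading (full input size instead of 4096-byte chunks) is
# inferred from the '111250 bytes' value printed by the harness.
set -euo pipefail

SPDK_DIR=/home/vagrant/spdk_repo/spdk

"$SPDK_DIR/build/examples/accel_perf" \
    -t 1 -w decompress \
    -l "$SPDK_DIR/test/accel/bib" \
    -y \
    -o 0
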
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:18.833 18:25:41 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:18.833 18:25:41 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:18.833 18:25:41 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:18.833 18:25:41 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:18.833 18:25:41 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.833 18:25:41 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.833 18:25:41 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:18.833 18:25:41 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:18.833 18:25:41 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:18.833 [2024-07-15 18:25:41.154707] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:06:18.833 [2024-07-15 18:25:41.154797] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64044 ] 00:06:18.833 [2024-07-15 18:25:41.299727] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:18.833 [2024-07-15 18:25:41.400517] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:18.833 [2024-07-15 18:25:41.400628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:18.833 [2024-07-15 18:25:41.400814] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:18.833 [2024-07-15 18:25:41.400818] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.090 18:25:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:19.090 18:25:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:19.090 18:25:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:19.090 18:25:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:19.090 18:25:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:19.090 18:25:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:19.090 18:25:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:19.090 18:25:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:19.090 18:25:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:19.090 18:25:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:19.090 18:25:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:19.090 18:25:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:19.090 18:25:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:19.090 18:25:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:19.090 18:25:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:19.090 18:25:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:19.090 18:25:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:19.090 18:25:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:19.090 18:25:41 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:19.090 18:25:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:19.090 18:25:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:19.090 18:25:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:19.090 18:25:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:19.090 18:25:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:19.090 18:25:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:19.090 18:25:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:19.090 18:25:41 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:19.090 18:25:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:19.090 18:25:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:19.090 18:25:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:19.090 18:25:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:19.090 18:25:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:19.090 18:25:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:19.090 18:25:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:19.090 18:25:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:19.090 18:25:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:19.090 18:25:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:19.090 18:25:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:19.090 18:25:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:19.090 18:25:41 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:19.090 18:25:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:19.090 18:25:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:19.090 18:25:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:19.090 18:25:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:19.090 18:25:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:19.090 18:25:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:19.090 18:25:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:19.090 18:25:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:19.090 18:25:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:19.090 18:25:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:19.090 18:25:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:19.090 18:25:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:19.090 18:25:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:19.090 18:25:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:19.090 18:25:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:19.090 18:25:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:19.090 18:25:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:19.090 18:25:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:19.090 18:25:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:19.090 18:25:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # 
case "$var" in 00:06:19.090 18:25:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:19.090 18:25:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:19.090 18:25:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:19.090 18:25:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:19.090 18:25:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:19.090 18:25:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:19.090 18:25:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:19.090 18:25:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:19.091 18:25:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:19.091 18:25:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:19.091 18:25:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:19.091 18:25:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:19.091 18:25:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:19.091 18:25:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:20.039 18:25:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:20.039 18:25:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:20.039 18:25:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:20.039 18:25:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:20.039 18:25:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:20.039 18:25:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:20.039 18:25:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:20.039 18:25:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:20.039 18:25:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:20.039 18:25:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:20.039 18:25:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:20.039 18:25:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:20.039 18:25:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:20.039 18:25:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:20.039 18:25:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:20.039 18:25:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:20.039 18:25:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:20.039 18:25:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:20.039 18:25:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:20.039 18:25:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:20.039 18:25:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:20.039 18:25:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:20.039 18:25:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:20.039 18:25:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:20.039 18:25:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:20.039 18:25:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:20.039 18:25:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:20.039 18:25:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:20.039 18:25:42 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:20.039 18:25:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:20.039 18:25:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:20.039 18:25:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:20.039 18:25:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:20.039 18:25:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:20.039 18:25:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:20.039 18:25:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:20.039 18:25:42 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:20.039 18:25:42 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:20.039 18:25:42 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:20.039 00:06:20.039 real 0m1.471s 00:06:20.039 user 0m4.585s 00:06:20.039 sys 0m0.115s 00:06:20.039 18:25:42 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.039 18:25:42 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:20.039 ************************************ 00:06:20.039 END TEST accel_decomp_mcore 00:06:20.039 ************************************ 00:06:20.039 18:25:42 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:20.039 18:25:42 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:20.039 18:25:42 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:20.039 18:25:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.039 18:25:42 accel -- common/autotest_common.sh@10 -- # set +x 00:06:20.297 ************************************ 00:06:20.297 START TEST accel_decomp_full_mcore 00:06:20.297 ************************************ 00:06:20.297 18:25:42 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:20.297 18:25:42 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:20.297 18:25:42 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:20.297 18:25:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:20.297 18:25:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:20.297 18:25:42 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:20.297 18:25:42 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:20.297 18:25:42 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:20.297 18:25:42 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:20.297 18:25:42 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:20.297 18:25:42 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.297 18:25:42 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.297 18:25:42 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:20.297 18:25:42 
accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:20.297 18:25:42 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:20.297 [2024-07-15 18:25:42.690829] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:06:20.297 [2024-07-15 18:25:42.690922] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64082 ] 00:06:20.297 [2024-07-15 18:25:42.833647] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:20.555 [2024-07-15 18:25:42.935815] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:20.555 [2024-07-15 18:25:42.936002] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:20.555 [2024-07-15 18:25:42.936186] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.555 [2024-07-15 18:25:42.936186] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:20.555 18:25:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:20.555 18:25:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:20.555 18:25:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:20.555 18:25:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:20.555 18:25:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:20.555 18:25:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:20.555 18:25:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:20.555 18:25:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:20.555 18:25:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:20.555 18:25:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:20.555 18:25:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:20.555 18:25:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:20.555 18:25:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:20.555 18:25:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:20.555 18:25:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:20.555 18:25:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:20.555 18:25:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:20.555 18:25:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:20.555 18:25:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:20.555 18:25:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:20.555 18:25:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:20.555 18:25:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:20.555 18:25:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:20.555 18:25:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:20.555 18:25:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:20.555 18:25:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:20.555 18:25:42 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:20.555 18:25:42 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:20.555 18:25:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:20.555 18:25:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:20.555 18:25:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:20.555 18:25:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:20.555 18:25:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:20.555 18:25:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:20.555 18:25:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:20.555 18:25:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:20.555 18:25:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:20.555 18:25:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:20.555 18:25:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:20.555 18:25:42 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:20.555 18:25:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:20.555 18:25:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:20.555 18:25:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:20.555 18:25:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:20.555 18:25:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:20.555 18:25:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:20.555 18:25:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:20.555 18:25:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:20.555 18:25:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:20.555 18:25:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:20.556 18:25:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:20.556 18:25:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:20.556 18:25:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:20.556 18:25:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:20.556 18:25:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:20.556 18:25:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:20.556 18:25:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:20.556 18:25:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:20.556 18:25:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:20.556 18:25:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:20.556 18:25:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:20.556 18:25:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:20.556 18:25:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:20.556 18:25:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:20.556 18:25:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:20.556 18:25:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:20.556 18:25:42 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:20.556 18:25:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:20.556 18:25:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:20.556 18:25:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:20.556 18:25:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:20.556 18:25:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:20.556 18:25:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:20.556 18:25:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:21.929 18:25:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:21.929 18:25:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:21.929 18:25:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:21.929 18:25:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:21.929 18:25:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:21.929 18:25:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:21.929 18:25:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:21.929 18:25:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:21.929 18:25:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:21.929 18:25:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:21.929 18:25:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:21.929 18:25:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:21.929 18:25:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:21.929 18:25:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:21.929 18:25:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:21.929 18:25:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:21.929 18:25:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:21.929 18:25:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:21.929 18:25:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:21.929 18:25:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:21.929 18:25:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:21.929 18:25:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:21.929 18:25:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:21.929 18:25:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:21.929 18:25:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:21.929 18:25:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:21.929 18:25:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:21.929 18:25:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:21.929 18:25:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:21.929 18:25:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:21.929 18:25:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:21.929 18:25:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:21.929 18:25:44 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:21.929 18:25:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:21.929 18:25:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:21.929 18:25:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:21.929 18:25:44 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:21.929 ************************************ 00:06:21.929 END TEST accel_decomp_full_mcore 00:06:21.929 ************************************ 00:06:21.929 18:25:44 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:21.929 18:25:44 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:21.929 00:06:21.929 real 0m1.488s 00:06:21.929 user 0m4.632s 00:06:21.929 sys 0m0.122s 00:06:21.929 18:25:44 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:21.929 18:25:44 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:21.929 18:25:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:21.929 18:25:44 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:21.929 18:25:44 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:21.929 18:25:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.929 18:25:44 accel -- common/autotest_common.sh@10 -- # set +x 00:06:21.929 ************************************ 00:06:21.929 START TEST accel_decomp_mthread 00:06:21.929 ************************************ 00:06:21.929 18:25:44 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:21.929 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:21.929 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:21.929 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:21.929 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:21.929 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:21.929 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:21.929 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:21.929 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:21.929 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:21.929 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.929 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.929 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:21.929 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:21.929 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:21.929 [2024-07-15 18:25:44.238960] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 
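The four decompress variants that run back to back here all drive the same accel_perf binary at build/examples/accel_perf; only the flags change. Reading the flags off this log and the vals it echoes: -t 1 is the one-second run time, -w decompress selects the workload, -l points at the pre-compressed test/accel/bib input, -m 0xf spreads the work over the four reported cores while -T 2 appears to ask for two worker threads on a single core, -o 0 evidently switches from the default 4096-byte operations to the full 111250-byte buffer, and -y presumably enables result verification. A hand-run sketch under those assumptions (the logged runs additionally pipe a generated JSON accel config over -c /dev/fd/62, which is omitted here):

  SPDK=/home/vagrant/spdk_repo/spdk
  BIB=$SPDK/test/accel/bib                      # input every variant decompresses
  PERF=$SPDK/build/examples/accel_perf
  "$PERF" -t 1 -w decompress -l "$BIB" -y -m 0xf        # accel_decomp_mcore: 4 cores, 4096-byte ops
  "$PERF" -t 1 -w decompress -l "$BIB" -y -o 0 -m 0xf   # accel_decomp_full_mcore: whole 111250-byte buffer
  "$PERF" -t 1 -w decompress -l "$BIB" -y -T 2          # accel_decomp_mthread: one core, two threads
  "$PERF" -t 1 -w decompress -l "$BIB" -y -o 0 -T 2     # accel_decomp_full_mthread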
00:06:21.929 [2024-07-15 18:25:44.239052] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64120 ] 00:06:21.929 [2024-07-15 18:25:44.379784] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.929 [2024-07-15 18:25:44.480439] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.929 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:21.929 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:21.929 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:21.929 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:21.929 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:21.929 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:21.929 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:21.929 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:21.929 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:21.929 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:21.929 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:21.929 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:21.929 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:21.929 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:21.929 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:21.929 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:21.929 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:21.929 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:21.929 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:21.929 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:21.929 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:21.929 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:21.929 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:21.929 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:21.929 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:21.929 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:21.929 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:21.929 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:21.929 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:21.929 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:21.929 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:21.929 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:21.929 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:21.929 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:21.929 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 
00:06:21.929 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:21.929 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:21.929 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:21.929 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:21.929 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:21.929 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:21.929 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:21.929 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:21.929 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:21.929 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:21.929 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:21.929 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:21.929 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:21.929 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:21.929 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:21.929 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:21.929 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:21.929 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:21.929 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:21.929 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:21.929 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:21.929 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:21.929 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:21.929 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:21.929 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:21.929 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:21.929 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:21.929 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:21.929 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:21.929 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:21.929 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:21.929 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:21.929 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:21.929 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:22.206 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:22.206 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:22.206 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:22.206 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:22.206 18:25:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:23.144 18:25:45 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:23.144 18:25:45 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:06:23.144 18:25:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:23.144 18:25:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:23.144 18:25:45 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:23.144 18:25:45 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:23.144 18:25:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:23.144 18:25:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:23.144 18:25:45 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:23.144 18:25:45 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:23.144 18:25:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:23.144 18:25:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:23.144 18:25:45 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:23.144 18:25:45 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:23.144 18:25:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:23.144 18:25:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:23.144 18:25:45 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:23.144 18:25:45 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:23.144 18:25:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:23.144 18:25:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:23.144 18:25:45 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:23.144 18:25:45 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:23.144 18:25:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:23.144 18:25:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:23.144 18:25:45 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:23.144 18:25:45 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:23.144 18:25:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:23.144 18:25:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:23.144 18:25:45 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:23.144 18:25:45 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:23.144 18:25:45 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:23.144 00:06:23.144 real 0m1.464s 00:06:23.144 user 0m1.264s 00:06:23.144 sys 0m0.108s 00:06:23.144 18:25:45 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:23.144 ************************************ 00:06:23.144 END TEST accel_decomp_mthread 00:06:23.144 ************************************ 00:06:23.144 18:25:45 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:23.144 18:25:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:23.144 18:25:45 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:23.144 18:25:45 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:23.144 18:25:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.144 18:25:45 accel -- common/autotest_common.sh@10 -- # set +x 00:06:23.144 ************************************ 00:06:23.144 START 
TEST accel_decomp_full_mthread 00:06:23.144 ************************************ 00:06:23.144 18:25:45 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:23.144 18:25:45 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:23.144 18:25:45 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:23.144 18:25:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:23.144 18:25:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:23.144 18:25:45 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:23.144 18:25:45 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:23.144 18:25:45 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:23.144 18:25:45 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:23.144 18:25:45 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:23.144 18:25:45 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:23.144 18:25:45 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:23.144 18:25:45 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:23.144 18:25:45 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:23.144 18:25:45 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:23.402 [2024-07-15 18:25:45.769889] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 
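Most of the trace volume in these accel tests is one small shell idiom repeated per expected setting: accel.sh splits key:value pairs with IFS=:, reads them with read -r var val, and a case "$var" statement records what the run is supposed to use (accel_opc=decompress at accel.sh@23, accel_module=software at accel.sh@22); the [[ -n software ]] / [[ -n decompress ]] / [[ software == software ]] checks at accel.sh@27 then decide pass or fail. The loop below is only an illustrative reconstruction of that pattern, with made-up variable names and a reduced set of cases, not the actual test/accel/accel.sh source:

  expected_opc=
  expected_module=
  while IFS=: read -r var val; do
    case "$var" in
      opc)    expected_opc=$val ;;     # cf. accel.sh@23: accel_opc=decompress
      module) expected_module=$val ;;  # cf. accel.sh@22: accel_module=software
      *)      : ;;                     # sizes, durations, core masks, ...
    esac
  done < <(printf '%s\n' opc:decompress module:software)
  [[ -n $expected_opc && -n $expected_module && $expected_module == software ]] \
    && echo "decompress ran on the $expected_module module"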
00:06:23.402 [2024-07-15 18:25:45.769966] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64153 ] 00:06:23.402 [2024-07-15 18:25:45.911886] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.402 [2024-07-15 18:25:46.011496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.660 18:25:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:23.660 18:25:46 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:23.660 18:25:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:23.660 18:25:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:23.660 18:25:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:23.660 18:25:46 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:23.660 18:25:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:23.660 18:25:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:23.660 18:25:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:23.660 18:25:46 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:23.660 18:25:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:23.660 18:25:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:23.660 18:25:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:23.660 18:25:46 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:23.660 18:25:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:23.660 18:25:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:23.660 18:25:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:23.660 18:25:46 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:23.660 18:25:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:23.660 18:25:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:23.660 18:25:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:23.660 18:25:46 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:23.660 18:25:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:23.660 18:25:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:23.660 18:25:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:23.660 18:25:46 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:23.660 18:25:46 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:23.660 18:25:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:23.660 18:25:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:23.660 18:25:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:23.660 18:25:46 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:23.660 18:25:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:23.660 18:25:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 
00:06:23.660 18:25:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:23.660 18:25:46 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:23.660 18:25:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:23.660 18:25:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:23.660 18:25:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:23.660 18:25:46 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:23.660 18:25:46 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:23.660 18:25:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:23.660 18:25:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:23.660 18:25:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:23.660 18:25:46 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:23.660 18:25:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:23.660 18:25:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:23.660 18:25:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:23.660 18:25:46 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:23.660 18:25:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:23.660 18:25:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:23.660 18:25:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:23.660 18:25:46 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:23.660 18:25:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:23.660 18:25:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:23.660 18:25:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:23.661 18:25:46 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:23.661 18:25:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:23.661 18:25:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:23.661 18:25:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:23.661 18:25:46 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:23.661 18:25:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:23.661 18:25:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:23.661 18:25:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:23.661 18:25:46 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:23.661 18:25:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:23.661 18:25:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:23.661 18:25:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:23.661 18:25:46 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:23.661 18:25:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:23.661 18:25:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:23.661 18:25:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:23.661 18:25:46 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:23.661 18:25:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:23.661 18:25:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:25.037 18:25:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:25.037 18:25:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:25.037 18:25:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:25.037 18:25:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:25.037 18:25:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:25.037 18:25:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:25.037 18:25:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:25.037 18:25:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:25.037 18:25:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:25.037 18:25:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:25.037 18:25:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:25.037 18:25:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:25.037 18:25:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:25.037 18:25:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:25.037 18:25:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:25.037 18:25:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:25.037 18:25:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:25.037 18:25:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:25.037 18:25:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:25.037 18:25:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:25.037 18:25:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:25.037 18:25:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:25.037 18:25:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:25.037 18:25:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:25.037 18:25:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:25.037 18:25:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:25.037 18:25:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:25.037 18:25:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:25.037 18:25:47 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:25.037 18:25:47 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:25.037 18:25:47 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:25.037 00:06:25.037 real 0m1.493s 00:06:25.037 user 0m1.304s 00:06:25.037 sys 0m0.102s 00:06:25.037 18:25:47 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:25.037 18:25:47 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:25.037 ************************************ 00:06:25.037 END TEST accel_decomp_full_mthread 00:06:25.037 ************************************ 
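The real/user/sys lines printed after each END TEST banner double as a rough check that the core masks took effect: dividing CPU time by wall time for the numbers logged above gives about three busy cores for the -m 0xf runs and a bit under one core for the single-core -T 2 runs. A throwaway helper to redo that arithmetic (values copied from this log):

  ratio() { awk -v u="$1" -v r="$2" 'BEGIN { printf "%.2f busy cores\n", u / r }'; }
  ratio 4.585 1.471   # accel_decomp_mcore        (-m 0xf) -> ~3.12
  ratio 4.632 1.488   # accel_decomp_full_mcore   (-m 0xf) -> ~3.11
  ratio 1.264 1.464   # accel_decomp_mthread      (-T 2)   -> ~0.86
  ratio 1.304 1.493   # accel_decomp_full_mthread (-T 2)   -> ~0.87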
00:06:25.037 18:25:47 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:25.037 18:25:47 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:25.037 18:25:47 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:25.037 18:25:47 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:25.037 18:25:47 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:25.037 18:25:47 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:25.037 18:25:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:25.037 18:25:47 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:25.037 18:25:47 accel -- common/autotest_common.sh@10 -- # set +x 00:06:25.037 18:25:47 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.037 18:25:47 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:25.037 18:25:47 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:25.037 18:25:47 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:25.037 18:25:47 accel -- accel/accel.sh@41 -- # jq -r . 00:06:25.037 ************************************ 00:06:25.037 START TEST accel_dif_functional_tests 00:06:25.037 ************************************ 00:06:25.037 18:25:47 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:25.037 [2024-07-15 18:25:47.348098] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:06:25.037 [2024-07-15 18:25:47.348178] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64190 ] 00:06:25.037 [2024-07-15 18:25:47.487746] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:25.037 [2024-07-15 18:25:47.588880] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:25.037 [2024-07-15 18:25:47.589065] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.037 [2024-07-15 18:25:47.589065] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:25.297 00:06:25.297 00:06:25.297 CUnit - A unit testing framework for C - Version 2.1-3 00:06:25.297 http://cunit.sourceforge.net/ 00:06:25.297 00:06:25.297 00:06:25.297 Suite: accel_dif 00:06:25.297 Test: verify: DIF generated, GUARD check ...passed 00:06:25.297 Test: verify: DIF generated, APPTAG check ...passed 00:06:25.297 Test: verify: DIF generated, REFTAG check ...passed 00:06:25.297 Test: verify: DIF not generated, GUARD check ...passed 00:06:25.297 Test: verify: DIF not generated, APPTAG check ...passed 00:06:25.297 Test: verify: DIF not generated, REFTAG check ...passed 00:06:25.297 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:25.297 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:06:25.297 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:25.297 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:25.297 Test: verify: REFTAG_INIT correct, REFTAG check ...[2024-07-15 18:25:47.661724] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:25.297 [2024-07-15 18:25:47.661787] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:25.297 [2024-07-15 18:25:47.661811] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, 
Actual=5a5a5a5a 00:06:25.297 [2024-07-15 18:25:47.661862] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:25.297 passed 00:06:25.297 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:06:25.297 Test: verify copy: DIF generated, GUARD check ...passed 00:06:25.297 Test: verify copy: DIF generated, APPTAG check ...[2024-07-15 18:25:47.661983] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:25.297 passed 00:06:25.297 Test: verify copy: DIF generated, REFTAG check ...passed 00:06:25.297 Test: verify copy: DIF not generated, GUARD check ...passed 00:06:25.297 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 18:25:47.662128] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:25.297 [2024-07-15 18:25:47.662158] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:25.297 passed 00:06:25.297 Test: verify copy: DIF not generated, REFTAG check ...passed 00:06:25.297 Test: generate copy: DIF generated, GUARD check ...passed 00:06:25.297 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:25.297 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:25.297 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:25.297 Test: generate copy: DIF generated, no APPTAG check flag set ...[2024-07-15 18:25:47.662182] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:25.297 passed 00:06:25.297 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:25.297 Test: generate copy: iovecs-len validate ...passed 00:06:25.297 Test: generate copy: buffer alignment validate ...passed 00:06:25.297 00:06:25.297 Run Summary: Type Total Ran Passed Failed Inactive 00:06:25.297 suites 1 1 n/a 0 0 00:06:25.297 tests 26 26 26 0 0 00:06:25.297 asserts 115 115 115 0 n/a 00:06:25.297 00:06:25.297 Elapsed time = 0.002 seconds 00:06:25.297 [2024-07-15 18:25:47.662381] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
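The *ERROR* lines from dif.c above are expected output, not failures: the accel_dif suite's negative tests deliberately feed data whose protection fields do not match - a Guard of 7867 where 5a5a was expected, wrong App and Ref Tags, and bounce iovecs misaligned with the block size - and each test passes precisely because the verify or generate-copy path reports the mismatch. (Guard, App Tag and Ref Tag are the usual T10 protection-information fields: a CRC guard plus application and reference tags.) The suite can be rerun on its own; the logged run pipes a generated JSON config over -c /dev/fd/62, and leaving the config out entirely is an assumption here:

  SPDK=/home/vagrant/spdk_repo/spdk
  "$SPDK/test/accel/dif/dif"    # CUnit accel_dif suite: 26 tests, 115 asserts in the run above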
00:06:25.297 00:06:25.297 real 0m0.543s 00:06:25.297 user 0m0.673s 00:06:25.297 sys 0m0.136s 00:06:25.297 18:25:47 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:25.297 18:25:47 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:25.297 ************************************ 00:06:25.297 END TEST accel_dif_functional_tests 00:06:25.297 ************************************ 00:06:25.297 18:25:47 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:25.297 00:06:25.297 real 0m33.731s 00:06:25.297 user 0m35.176s 00:06:25.297 sys 0m4.077s 00:06:25.297 18:25:47 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:25.297 18:25:47 accel -- common/autotest_common.sh@10 -- # set +x 00:06:25.297 ************************************ 00:06:25.297 END TEST accel 00:06:25.297 ************************************ 00:06:25.557 18:25:47 -- common/autotest_common.sh@1142 -- # return 0 00:06:25.557 18:25:47 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:06:25.557 18:25:47 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:25.557 18:25:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:25.557 18:25:47 -- common/autotest_common.sh@10 -- # set +x 00:06:25.557 ************************************ 00:06:25.557 START TEST accel_rpc 00:06:25.557 ************************************ 00:06:25.557 18:25:47 accel_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:06:25.557 * Looking for test storage... 00:06:25.557 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:25.557 18:25:48 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:25.557 18:25:48 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=64260 00:06:25.557 18:25:48 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:25.557 18:25:48 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 64260 00:06:25.557 18:25:48 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 64260 ']' 00:06:25.557 18:25:48 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.557 18:25:48 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:25.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.557 18:25:48 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.557 18:25:48 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:25.557 18:25:48 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.557 [2024-07-15 18:25:48.150851] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 
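accel_rpc.sh needs a target it can reconfigure before any subsystem comes up, so it launches spdk_tgt with --wait-for-rpc, which holds the application at the RPC startup phase, and waitforlisten blocks until the /var/tmp/spdk.sock UNIX socket answers; only the later framework_start_init call (visible further down) lets initialization finish. A simplified version of that handshake, with a crude polling loop standing in for the waitforlisten helper:

  SPDK=/home/vagrant/spdk_repo/spdk
  "$SPDK/build/bin/spdk_tgt" --wait-for-rpc &
  tgt_pid=$!
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done   # stand-in for waitforlisten
  # ... pre-init RPCs (accel_assign_opc etc.) go here ...
  "$SPDK/scripts/rpc.py" framework_start_init
  kill "$tgt_pid"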
00:06:25.557 [2024-07-15 18:25:48.150938] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64260 ] 00:06:25.815 [2024-07-15 18:25:48.289856] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.815 [2024-07-15 18:25:48.388609] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.751 18:25:49 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:26.751 18:25:49 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:26.751 18:25:49 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:26.751 18:25:49 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:26.751 18:25:49 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:26.751 18:25:49 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:26.751 18:25:49 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:26.751 18:25:49 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:26.751 18:25:49 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.751 18:25:49 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.751 ************************************ 00:06:26.751 START TEST accel_assign_opcode 00:06:26.751 ************************************ 00:06:26.751 18:25:49 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:06:26.751 18:25:49 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:26.751 18:25:49 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:26.751 18:25:49 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:26.751 [2024-07-15 18:25:49.036144] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:26.751 18:25:49 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:26.751 18:25:49 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:26.751 18:25:49 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:26.751 18:25:49 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:26.752 [2024-07-15 18:25:49.048068] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:26.752 18:25:49 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:26.752 18:25:49 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:26.752 18:25:49 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:26.752 18:25:49 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:26.752 18:25:49 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:26.752 18:25:49 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:26.752 18:25:49 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:26.752 18:25:49 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:26.752 18:25:49 accel_rpc.accel_assign_opcode 
-- common/autotest_common.sh@10 -- # set +x 00:06:26.752 18:25:49 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:27.010 18:25:49 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.010 software 00:06:27.010 00:06:27.010 real 0m0.373s 00:06:27.010 user 0m0.051s 00:06:27.010 sys 0m0.016s 00:06:27.010 18:25:49 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:27.010 18:25:49 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:27.010 ************************************ 00:06:27.010 END TEST accel_assign_opcode 00:06:27.010 ************************************ 00:06:27.010 18:25:49 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:27.010 18:25:49 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 64260 00:06:27.010 18:25:49 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 64260 ']' 00:06:27.010 18:25:49 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 64260 00:06:27.010 18:25:49 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:06:27.010 18:25:49 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:27.010 18:25:49 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64260 00:06:27.010 18:25:49 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:27.010 18:25:49 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:27.010 killing process with pid 64260 00:06:27.010 18:25:49 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64260' 00:06:27.010 18:25:49 accel_rpc -- common/autotest_common.sh@967 -- # kill 64260 00:06:27.010 18:25:49 accel_rpc -- common/autotest_common.sh@972 -- # wait 64260 00:06:27.577 00:06:27.577 real 0m2.052s 00:06:27.577 user 0m1.949s 00:06:27.577 sys 0m0.580s 00:06:27.577 18:25:50 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:27.577 18:25:50 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.577 ************************************ 00:06:27.577 END TEST accel_rpc 00:06:27.577 ************************************ 00:06:27.577 18:25:50 -- common/autotest_common.sh@1142 -- # return 0 00:06:27.577 18:25:50 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:27.577 18:25:50 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:27.577 18:25:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.577 18:25:50 -- common/autotest_common.sh@10 -- # set +x 00:06:27.577 ************************************ 00:06:27.577 START TEST app_cmdline 00:06:27.577 ************************************ 00:06:27.577 18:25:50 app_cmdline -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:27.835 * Looking for test storage... 
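The accel_assign_opcode subtest that just finished boils down to four RPCs: accel_assign_opc is accepted while the target is still held before init even with a nonsense module name ("incorrect"), a second call overrides it with software, framework_start_init then brings the framework up, and accel_get_opc_assignments piped through jq -r .copy reads back "software", which the grep asserts. The same round trip by hand, against a target started with --wait-for-rpc as sketched above:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$RPC" accel_assign_opc -o copy -m incorrect    # accepted pre-init despite the bogus module name
  "$RPC" accel_assign_opc -o copy -m software     # last assignment wins
  "$RPC" framework_start_init
  "$RPC" accel_get_opc_assignments | jq -r .copy  # prints: software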
00:06:27.835 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:27.835 18:25:50 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:27.835 18:25:50 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=64371 00:06:27.835 18:25:50 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:27.835 18:25:50 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 64371 00:06:27.835 18:25:50 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 64371 ']' 00:06:27.835 18:25:50 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.835 18:25:50 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:27.835 18:25:50 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.835 18:25:50 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:27.835 18:25:50 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:27.835 [2024-07-15 18:25:50.269894] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:06:27.835 [2024-07-15 18:25:50.269969] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64371 ] 00:06:27.835 [2024-07-15 18:25:50.409954] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.093 [2024-07-15 18:25:50.540821] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.702 18:25:51 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:28.702 18:25:51 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:06:28.702 18:25:51 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:28.702 { 00:06:28.702 "fields": { 00:06:28.702 "commit": "cd61d4ab3", 00:06:28.702 "major": 24, 00:06:28.702 "minor": 9, 00:06:28.702 "patch": 0, 00:06:28.702 "suffix": "-pre" 00:06:28.702 }, 00:06:28.702 "version": "SPDK v24.09-pre git sha1 cd61d4ab3" 00:06:28.702 } 00:06:28.702 18:25:51 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:28.702 18:25:51 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:28.703 18:25:51 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:28.703 18:25:51 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:28.703 18:25:51 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:28.703 18:25:51 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:28.703 18:25:51 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:28.703 18:25:51 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.703 18:25:51 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:28.962 18:25:51 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.962 18:25:51 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:28.962 18:25:51 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:28.962 18:25:51 app_cmdline -- 
app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:28.962 18:25:51 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:06:28.962 18:25:51 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:28.962 18:25:51 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:28.962 18:25:51 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:28.962 18:25:51 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:28.962 18:25:51 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:28.962 18:25:51 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:28.962 18:25:51 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:28.962 18:25:51 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:28.962 18:25:51 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:28.962 18:25:51 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:28.962 2024/07/15 18:25:51 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:06:28.962 request: 00:06:28.962 { 00:06:28.962 "method": "env_dpdk_get_mem_stats", 00:06:28.962 "params": {} 00:06:28.962 } 00:06:28.962 Got JSON-RPC error response 00:06:28.962 GoRPCClient: error on JSON-RPC call 00:06:28.962 18:25:51 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:06:28.962 18:25:51 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:28.962 18:25:51 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:28.962 18:25:51 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:28.962 18:25:51 app_cmdline -- app/cmdline.sh@1 -- # killprocess 64371 00:06:28.962 18:25:51 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 64371 ']' 00:06:28.962 18:25:51 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 64371 00:06:28.962 18:25:51 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:06:28.962 18:25:51 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:28.962 18:25:51 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64371 00:06:29.221 18:25:51 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:29.221 killing process with pid 64371 00:06:29.221 18:25:51 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:29.221 18:25:51 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64371' 00:06:29.221 18:25:51 app_cmdline -- common/autotest_common.sh@967 -- # kill 64371 00:06:29.221 18:25:51 app_cmdline -- common/autotest_common.sh@972 -- # wait 64371 00:06:29.788 00:06:29.788 real 0m2.026s 00:06:29.788 user 0m2.185s 00:06:29.788 sys 0m0.606s 00:06:29.788 ************************************ 00:06:29.788 END TEST app_cmdline 00:06:29.788 ************************************ 00:06:29.788 18:25:52 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:29.788 18:25:52 app_cmdline -- 
common/autotest_common.sh@10 -- # set +x 00:06:29.788 18:25:52 -- common/autotest_common.sh@1142 -- # return 0 00:06:29.788 18:25:52 -- spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:29.788 18:25:52 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:29.788 18:25:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.788 18:25:52 -- common/autotest_common.sh@10 -- # set +x 00:06:29.788 ************************************ 00:06:29.788 START TEST version 00:06:29.788 ************************************ 00:06:29.788 18:25:52 version -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:29.788 * Looking for test storage... 00:06:29.788 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:29.788 18:25:52 version -- app/version.sh@17 -- # get_header_version major 00:06:29.788 18:25:52 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:29.788 18:25:52 version -- app/version.sh@14 -- # cut -f2 00:06:29.788 18:25:52 version -- app/version.sh@14 -- # tr -d '"' 00:06:29.788 18:25:52 version -- app/version.sh@17 -- # major=24 00:06:29.788 18:25:52 version -- app/version.sh@18 -- # get_header_version minor 00:06:29.788 18:25:52 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:29.788 18:25:52 version -- app/version.sh@14 -- # cut -f2 00:06:29.788 18:25:52 version -- app/version.sh@14 -- # tr -d '"' 00:06:29.788 18:25:52 version -- app/version.sh@18 -- # minor=9 00:06:29.788 18:25:52 version -- app/version.sh@19 -- # get_header_version patch 00:06:29.788 18:25:52 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:29.788 18:25:52 version -- app/version.sh@14 -- # cut -f2 00:06:29.788 18:25:52 version -- app/version.sh@14 -- # tr -d '"' 00:06:29.788 18:25:52 version -- app/version.sh@19 -- # patch=0 00:06:29.788 18:25:52 version -- app/version.sh@20 -- # get_header_version suffix 00:06:29.788 18:25:52 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:29.788 18:25:52 version -- app/version.sh@14 -- # cut -f2 00:06:29.788 18:25:52 version -- app/version.sh@14 -- # tr -d '"' 00:06:29.788 18:25:52 version -- app/version.sh@20 -- # suffix=-pre 00:06:29.788 18:25:52 version -- app/version.sh@22 -- # version=24.9 00:06:29.788 18:25:52 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:29.788 18:25:52 version -- app/version.sh@28 -- # version=24.9rc0 00:06:29.788 18:25:52 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:29.788 18:25:52 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:30.048 18:25:52 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:30.048 18:25:52 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:30.048 00:06:30.048 real 0m0.224s 00:06:30.048 user 0m0.117s 00:06:30.048 sys 0m0.164s 00:06:30.048 18:25:52 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:30.048 18:25:52 version -- common/autotest_common.sh@10 -- # set 
+x 00:06:30.048 ************************************ 00:06:30.048 END TEST version 00:06:30.048 ************************************ 00:06:30.048 18:25:52 -- common/autotest_common.sh@1142 -- # return 0 00:06:30.048 18:25:52 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:06:30.048 18:25:52 -- spdk/autotest.sh@198 -- # uname -s 00:06:30.048 18:25:52 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:06:30.048 18:25:52 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:30.048 18:25:52 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:30.048 18:25:52 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:06:30.048 18:25:52 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:30.048 18:25:52 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:30.048 18:25:52 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:30.048 18:25:52 -- common/autotest_common.sh@10 -- # set +x 00:06:30.048 18:25:52 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:30.048 18:25:52 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:06:30.048 18:25:52 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:06:30.048 18:25:52 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:06:30.048 18:25:52 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:06:30.048 18:25:52 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:06:30.048 18:25:52 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:30.048 18:25:52 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:30.048 18:25:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.048 18:25:52 -- common/autotest_common.sh@10 -- # set +x 00:06:30.048 ************************************ 00:06:30.048 START TEST nvmf_tcp 00:06:30.048 ************************************ 00:06:30.048 18:25:52 nvmf_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:30.308 * Looking for test storage... 00:06:30.308 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:06:30.308 18:25:52 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:30.308 18:25:52 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:30.308 18:25:52 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:30.308 18:25:52 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:06:30.308 18:25:52 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:30.308 18:25:52 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:30.308 18:25:52 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:30.308 18:25:52 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:30.308 18:25:52 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:30.308 18:25:52 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:30.308 18:25:52 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:30.308 18:25:52 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:30.308 18:25:52 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:30.308 18:25:52 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:30.309 18:25:52 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:06:30.309 18:25:52 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=ee8aff67-4252-4979-91cf-1a72f40d57b6 00:06:30.309 18:25:52 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:30.309 18:25:52 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:30.309 18:25:52 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:30.309 18:25:52 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:30.309 18:25:52 nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:30.309 18:25:52 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:30.309 18:25:52 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:30.309 18:25:52 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:30.309 18:25:52 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.309 18:25:52 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.309 18:25:52 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.309 18:25:52 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:06:30.309 18:25:52 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.309 18:25:52 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:06:30.309 18:25:52 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:30.309 18:25:52 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:30.309 18:25:52 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:30.309 18:25:52 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:30.309 18:25:52 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:30.309 18:25:52 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:30.309 18:25:52 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:30.309 18:25:52 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:30.309 18:25:52 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:30.309 18:25:52 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:30.309 18:25:52 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:30.309 18:25:52 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:30.309 18:25:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:30.309 18:25:52 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:30.309 18:25:52 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:30.309 18:25:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:30.309 18:25:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.309 18:25:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:30.309 ************************************ 00:06:30.309 START TEST nvmf_example 00:06:30.309 ************************************ 00:06:30.309 18:25:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:30.309 * Looking for test storage... 
00:06:30.309 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:30.309 18:25:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:30.309 18:25:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:06:30.309 18:25:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:30.309 18:25:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:30.309 18:25:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:30.309 18:25:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:30.309 18:25:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:30.309 18:25:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:30.309 18:25:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:30.309 18:25:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:30.309 18:25:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:30.309 18:25:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:30.309 18:25:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:06:30.309 18:25:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=ee8aff67-4252-4979-91cf-1a72f40d57b6 00:06:30.309 18:25:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:30.309 18:25:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:30.309 18:25:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:30.309 18:25:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:30.309 18:25:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:30.309 18:25:52 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:30.309 18:25:52 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:30.309 18:25:52 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:30.309 18:25:52 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.309 18:25:52 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:06:30.309 18:25:52 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.309 18:25:52 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:06:30.309 18:25:52 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.309 18:25:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:06:30.309 18:25:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:30.309 18:25:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:30.309 18:25:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:30.309 18:25:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:30.309 18:25:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:30.309 18:25:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:30.309 18:25:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:30.309 18:25:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:30.309 18:25:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:30.309 18:25:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:30.309 18:25:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:30.309 18:25:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:30.309 18:25:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:30.309 18:25:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:30.309 18:25:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:30.309 18:25:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:30.309 18:25:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:30.309 18:25:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:30.309 18:25:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:30.309 18:25:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:30.309 18:25:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:30.309 18:25:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:30.309 18:25:52 
nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:30.309 18:25:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:30.309 18:25:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:30.309 18:25:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:30.309 18:25:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:30.569 18:25:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:06:30.569 18:25:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:06:30.569 18:25:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:06:30.569 18:25:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:06:30.569 18:25:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:06:30.569 18:25:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@432 -- # nvmf_veth_init 00:06:30.569 18:25:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:30.569 18:25:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:30.569 18:25:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:06:30.569 18:25:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:06:30.569 18:25:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:30.569 18:25:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:30.569 18:25:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:30.569 18:25:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:30.569 18:25:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:30.569 18:25:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:30.569 18:25:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:30.569 18:25:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:30.569 18:25:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:06:30.569 Cannot find device "nvmf_init_br" 00:06:30.569 18:25:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@154 -- # true 00:06:30.569 18:25:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:06:30.569 Cannot find device "nvmf_tgt_br" 00:06:30.569 18:25:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@155 -- # true 00:06:30.569 18:25:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:06:30.569 Cannot find device "nvmf_tgt_br2" 00:06:30.569 18:25:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@156 -- # true 00:06:30.569 18:25:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:06:30.569 Cannot find device "nvmf_init_br" 00:06:30.569 18:25:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@157 -- # true 00:06:30.569 18:25:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:06:30.569 Cannot find device "nvmf_tgt_br" 00:06:30.569 18:25:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@158 -- # true 00:06:30.569 18:25:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:06:30.569 Cannot find device 
"nvmf_tgt_br2" 00:06:30.569 18:25:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@159 -- # true 00:06:30.569 18:25:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:06:30.569 Cannot find device "nvmf_br" 00:06:30.569 18:25:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@160 -- # true 00:06:30.569 18:25:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:06:30.569 Cannot find device "nvmf_init_if" 00:06:30.569 18:25:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@161 -- # true 00:06:30.569 18:25:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:30.569 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:30.569 18:25:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@162 -- # true 00:06:30.569 18:25:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:30.569 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:30.569 18:25:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@163 -- # true 00:06:30.569 18:25:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:06:30.569 18:25:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:30.569 18:25:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:30.569 18:25:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:30.569 18:25:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:30.569 18:25:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:30.569 18:25:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:30.569 18:25:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:06:30.837 18:25:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:06:30.837 18:25:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:06:30.837 18:25:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:06:30.837 18:25:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:06:30.837 18:25:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:06:30.837 18:25:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:30.837 18:25:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:30.837 18:25:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:30.837 18:25:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:06:30.837 18:25:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:06:30.837 18:25:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:06:30.837 18:25:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:30.837 18:25:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 
00:06:30.837 18:25:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:30.837 18:25:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:30.837 18:25:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:06:30.837 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:30.837 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.112 ms 00:06:30.837 00:06:30.837 --- 10.0.0.2 ping statistics --- 00:06:30.837 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:30.837 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:06:30.837 18:25:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:06:30.837 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:30.837 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:06:30.837 00:06:30.837 --- 10.0.0.3 ping statistics --- 00:06:30.837 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:30.837 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:06:30.837 18:25:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:30.837 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:30.837 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:06:30.837 00:06:30.837 --- 10.0.0.1 ping statistics --- 00:06:30.837 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:30.837 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:06:30.837 18:25:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:30.837 18:25:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@433 -- # return 0 00:06:30.837 18:25:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:30.837 18:25:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:30.837 18:25:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:30.837 18:25:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:30.837 18:25:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:30.837 18:25:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:30.837 18:25:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:30.837 18:25:53 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:06:30.837 18:25:53 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:06:30.837 18:25:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:30.837 18:25:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:30.837 18:25:53 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:06:30.837 18:25:53 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:06:30.837 18:25:53 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=64719 00:06:30.837 18:25:53 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:30.837 18:25:53 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 64719 00:06:30.837 18:25:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 64719 ']' 00:06:30.837 18:25:53 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 
-- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:06:30.837 18:25:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.837 18:25:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:30.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.837 18:25:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.837 18:25:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:30.837 18:25:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:31.775 18:25:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:31.775 18:25:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:06:31.775 18:25:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:06:31.775 18:25:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:31.775 18:25:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:31.775 18:25:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:31.775 18:25:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:31.775 18:25:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:32.037 18:25:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.037 18:25:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:06:32.037 18:25:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.037 18:25:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:32.037 18:25:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.037 18:25:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:06:32.037 18:25:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:32.037 18:25:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.037 18:25:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:32.037 18:25:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.037 18:25:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:06:32.037 18:25:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:32.037 18:25:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.037 18:25:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:32.037 18:25:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.037 18:25:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:32.037 18:25:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.037 18:25:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:32.037 18:25:54 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.037 18:25:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:06:32.037 18:25:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:06:44.301 Initializing NVMe Controllers 00:06:44.301 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:44.301 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:44.301 Initialization complete. Launching workers. 00:06:44.301 ======================================================== 00:06:44.301 Latency(us) 00:06:44.301 Device Information : IOPS MiB/s Average min max 00:06:44.301 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16406.35 64.09 3900.73 688.01 23091.89 00:06:44.301 ======================================================== 00:06:44.301 Total : 16406.35 64.09 3900.73 688.01 23091.89 00:06:44.301 00:06:44.301 18:26:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:06:44.301 18:26:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:06:44.301 18:26:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:44.301 18:26:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:06:44.301 18:26:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:44.301 18:26:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:06:44.301 18:26:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:44.301 18:26:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:44.301 rmmod nvme_tcp 00:06:44.301 rmmod nvme_fabrics 00:06:44.301 rmmod nvme_keyring 00:06:44.301 18:26:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:44.301 18:26:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:06:44.301 18:26:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:06:44.301 18:26:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 64719 ']' 00:06:44.301 18:26:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 64719 00:06:44.301 18:26:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 64719 ']' 00:06:44.301 18:26:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 64719 00:06:44.301 18:26:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:06:44.301 18:26:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:44.301 18:26:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64719 00:06:44.301 18:26:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:06:44.301 18:26:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:06:44.301 killing process with pid 64719 00:06:44.301 18:26:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64719' 00:06:44.301 18:26:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 64719 00:06:44.301 18:26:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 64719 00:06:44.301 nvmf threads initialize successfully 00:06:44.301 bdev subsystem init successfully 
00:06:44.301 created a nvmf target service 00:06:44.301 create targets's poll groups done 00:06:44.301 all subsystems of target started 00:06:44.301 nvmf target is running 00:06:44.301 all subsystems of target stopped 00:06:44.301 destroy targets's poll groups done 00:06:44.301 destroyed the nvmf target service 00:06:44.301 bdev subsystem finish successfully 00:06:44.301 nvmf threads destroy successfully 00:06:44.301 18:26:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:44.301 18:26:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:44.301 18:26:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:44.301 18:26:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:44.301 18:26:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:44.301 18:26:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:44.301 18:26:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:44.301 18:26:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:44.301 18:26:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:06:44.301 18:26:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:06:44.301 18:26:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:44.301 18:26:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:44.301 00:06:44.301 real 0m12.422s 00:06:44.301 user 0m43.360s 00:06:44.301 sys 0m2.506s 00:06:44.301 18:26:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:44.301 18:26:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:44.301 ************************************ 00:06:44.301 END TEST nvmf_example 00:06:44.301 ************************************ 00:06:44.301 18:26:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:44.301 18:26:05 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:44.301 18:26:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:44.301 18:26:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.301 18:26:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:44.301 ************************************ 00:06:44.301 START TEST nvmf_filesystem 00:06:44.301 ************************************ 00:06:44.301 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:44.301 * Looking for test storage... 
00:06:44.301 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:44.301 18:26:05 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:06:44.301 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:44.301 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=y 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=y 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:06:44.302 18:26:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:44.302 #define SPDK_CONFIG_H 00:06:44.302 #define SPDK_CONFIG_APPS 1 00:06:44.302 #define SPDK_CONFIG_ARCH native 00:06:44.302 #undef SPDK_CONFIG_ASAN 00:06:44.302 #define SPDK_CONFIG_AVAHI 1 00:06:44.302 #undef SPDK_CONFIG_CET 00:06:44.302 #define SPDK_CONFIG_COVERAGE 1 00:06:44.302 #define SPDK_CONFIG_CROSS_PREFIX 00:06:44.302 #undef SPDK_CONFIG_CRYPTO 00:06:44.302 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:44.302 #undef SPDK_CONFIG_CUSTOMOCF 00:06:44.302 #undef SPDK_CONFIG_DAOS 00:06:44.302 #define SPDK_CONFIG_DAOS_DIR 00:06:44.302 #define SPDK_CONFIG_DEBUG 1 00:06:44.302 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:44.302 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:06:44.302 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:44.302 #define SPDK_CONFIG_DPDK_LIB_DIR 00:06:44.302 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:44.302 #undef SPDK_CONFIG_DPDK_UADK 00:06:44.302 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:44.302 #define SPDK_CONFIG_EXAMPLES 1 00:06:44.302 #undef SPDK_CONFIG_FC 00:06:44.302 #define SPDK_CONFIG_FC_PATH 00:06:44.302 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:44.302 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:44.302 #undef SPDK_CONFIG_FUSE 00:06:44.302 #undef SPDK_CONFIG_FUZZER 00:06:44.302 #define SPDK_CONFIG_FUZZER_LIB 00:06:44.302 #define SPDK_CONFIG_GOLANG 1 00:06:44.302 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:44.302 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:44.302 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:44.302 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:06:44.302 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:44.302 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:44.302 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:44.302 #define SPDK_CONFIG_IDXD 1 00:06:44.303 #define SPDK_CONFIG_IDXD_KERNEL 1 00:06:44.303 #undef SPDK_CONFIG_IPSEC_MB 00:06:44.303 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:44.303 #define SPDK_CONFIG_ISAL 1 00:06:44.303 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:44.303 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:44.303 #define SPDK_CONFIG_LIBDIR 00:06:44.303 #undef SPDK_CONFIG_LTO 00:06:44.303 #define SPDK_CONFIG_MAX_LCORES 128 00:06:44.303 #define SPDK_CONFIG_NVME_CUSE 1 00:06:44.303 #undef SPDK_CONFIG_OCF 00:06:44.303 #define SPDK_CONFIG_OCF_PATH 00:06:44.303 #define SPDK_CONFIG_OPENSSL_PATH 00:06:44.303 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:44.303 #define SPDK_CONFIG_PGO_DIR 00:06:44.303 #undef SPDK_CONFIG_PGO_USE 00:06:44.303 #define SPDK_CONFIG_PREFIX /usr/local 00:06:44.303 #undef SPDK_CONFIG_RAID5F 00:06:44.303 #undef SPDK_CONFIG_RBD 00:06:44.303 #define SPDK_CONFIG_RDMA 1 00:06:44.303 #define SPDK_CONFIG_RDMA_PROV verbs 
00:06:44.303 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:44.303 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:44.303 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:44.303 #define SPDK_CONFIG_SHARED 1 00:06:44.303 #undef SPDK_CONFIG_SMA 00:06:44.303 #define SPDK_CONFIG_TESTS 1 00:06:44.303 #undef SPDK_CONFIG_TSAN 00:06:44.303 #define SPDK_CONFIG_UBLK 1 00:06:44.303 #define SPDK_CONFIG_UBSAN 1 00:06:44.303 #undef SPDK_CONFIG_UNIT_TESTS 00:06:44.303 #undef SPDK_CONFIG_URING 00:06:44.303 #define SPDK_CONFIG_URING_PATH 00:06:44.303 #undef SPDK_CONFIG_URING_ZNS 00:06:44.303 #define SPDK_CONFIG_USDT 1 00:06:44.303 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:44.303 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:44.303 #undef SPDK_CONFIG_VFIO_USER 00:06:44.303 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:44.303 #define SPDK_CONFIG_VHOST 1 00:06:44.303 #define SPDK_CONFIG_VIRTIO 1 00:06:44.303 #undef SPDK_CONFIG_VTUNE 00:06:44.303 #define SPDK_CONFIG_VTUNE_DIR 00:06:44.303 #define SPDK_CONFIG_WERROR 1 00:06:44.303 #define SPDK_CONFIG_WPDK_DIR 00:06:44.303 #undef SPDK_CONFIG_XNVME 00:06:44.303 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- 
pm/common@81 -- # [[ Linux == Linux ]] 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 0 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:06:44.303 18:26:05 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:06:44.303 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 
-- # : 0 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 1 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@164 -- # : 0 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 1 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 1 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export 
PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export 
QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j10 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:06:44.304 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 64963 ]] 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 64963 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@331 -- # local mount target_dir 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.NFgtdk 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.NFgtdk/tests/target /tmp/spdk.NFgtdk 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=devtmpfs 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=4194304 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=4194304 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6264516608 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6267891712 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=3375104 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=2494353408 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=2507157504 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=12804096 00:06:44.305 
18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda5 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=btrfs 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=13787197440 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=20314062848 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=5242621952 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda5 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=btrfs 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=13787197440 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=20314062848 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=5242621952 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6267756544 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6267895808 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=139264 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda2 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext4 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=843546624 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=1012768768 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=100016128 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda3 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=vfat 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=92499968 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=104607744 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=12107776 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=1253572608 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=1253576704 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt/output 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=fuse.sshfs 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=91760283648 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=105088212992 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=7942496256 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:06:44.305 * Looking for test storage... 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/home 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=13787197440 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ btrfs == tmpfs ]] 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ btrfs == ramfs ]] 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ /home == / ]] 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:44.305 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:44.306 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@1682 -- # set -o errtrace 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=ee8aff67-4252-4979-91cf-1a72f40d57b6 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh 
]] 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@432 -- # nvmf_veth_init 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:06:44.306 Cannot find device "nvmf_tgt_br" 00:06:44.306 18:26:05 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@155 -- # true 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:06:44.306 Cannot find device "nvmf_tgt_br2" 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@156 -- # true 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:06:44.306 Cannot find device "nvmf_tgt_br" 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@158 -- # true 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:06:44.306 Cannot find device "nvmf_tgt_br2" 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@159 -- # true 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:44.306 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@162 -- # true 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:44.306 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@163 -- # true 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:06:44.306 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:06:44.307 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:06:44.307 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:06:44.307 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:44.307 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:44.307 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@189 
-- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:44.307 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:06:44.307 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:06:44.307 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:06:44.307 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:44.307 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:44.307 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:44.307 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:44.307 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:06:44.307 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:44.307 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.119 ms 00:06:44.307 00:06:44.307 --- 10.0.0.2 ping statistics --- 00:06:44.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:44.307 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:06:44.307 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:06:44.307 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:44.307 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:06:44.307 00:06:44.307 --- 10.0.0.3 ping statistics --- 00:06:44.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:44.307 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:06:44.307 18:26:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:44.307 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:44.307 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:06:44.307 00:06:44.307 --- 10.0.0.1 ping statistics --- 00:06:44.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:44.307 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:06:44.307 18:26:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:44.307 18:26:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@433 -- # return 0 00:06:44.307 18:26:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:44.307 18:26:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:44.307 18:26:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:44.307 18:26:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:44.307 18:26:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:44.307 18:26:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:44.307 18:26:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:44.307 18:26:06 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:06:44.307 18:26:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:44.307 18:26:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.307 18:26:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:44.307 ************************************ 00:06:44.307 START TEST nvmf_filesystem_no_in_capsule 00:06:44.307 ************************************ 00:06:44.307 18:26:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:06:44.307 18:26:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:06:44.307 18:26:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:44.307 18:26:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:44.307 18:26:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:44.307 18:26:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:44.307 18:26:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=65131 00:06:44.307 18:26:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 65131 00:06:44.307 18:26:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:44.307 18:26:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 65131 ']' 00:06:44.307 18:26:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.307 18:26:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:44.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
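For readability, the interface plumbing traced above reduces to the sequence below. This is a consolidated sketch assembled from the commands visible in the nvmf/common.sh trace, not the script source itself; it only summarizes what the harness already did before launching nvmf_tgt inside the namespace.

  # veth/netns/bridge topology as traced by nvmf/common.sh
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3          # host side reaches the namespace
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1  # namespace reaches the host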
00:06:44.307 18:26:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.307 18:26:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:44.307 18:26:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:44.307 [2024-07-15 18:26:06.130024] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:06:44.307 [2024-07-15 18:26:06.130153] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:44.307 [2024-07-15 18:26:06.274214] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:44.307 [2024-07-15 18:26:06.425136] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:44.307 [2024-07-15 18:26:06.425190] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:44.307 [2024-07-15 18:26:06.425206] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:44.307 [2024-07-15 18:26:06.425215] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:44.307 [2024-07-15 18:26:06.425222] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:44.307 [2024-07-15 18:26:06.425395] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:44.307 [2024-07-15 18:26:06.425474] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:44.307 [2024-07-15 18:26:06.426355] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.307 [2024-07-15 18:26:06.426357] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:44.565 18:26:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:44.565 18:26:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:06:44.565 18:26:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:44.565 18:26:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:44.565 18:26:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:44.565 18:26:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:44.565 18:26:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:44.565 18:26:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:44.565 18:26:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:44.565 18:26:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:44.565 [2024-07-15 18:26:07.078409] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:44.565 
18:26:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:44.565 18:26:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:44.565 18:26:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:44.566 18:26:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:44.824 Malloc1 00:06:44.824 18:26:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:44.824 18:26:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:44.824 18:26:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:44.824 18:26:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:44.824 18:26:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:44.824 18:26:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:44.824 18:26:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:44.824 18:26:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:44.824 18:26:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:44.824 18:26:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:44.824 18:26:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:44.824 18:26:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:44.824 [2024-07-15 18:26:07.326034] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:44.824 18:26:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:44.824 18:26:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:44.824 18:26:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:06:44.824 18:26:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:06:44.824 18:26:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:06:44.824 18:26:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:06:44.824 18:26:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:44.824 18:26:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:44.824 18:26:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- 
# set +x 00:06:44.824 18:26:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:44.824 18:26:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:06:44.824 { 00:06:44.824 "aliases": [ 00:06:44.824 "2b55965b-e899-444e-a89f-7e8ec4229e67" 00:06:44.824 ], 00:06:44.824 "assigned_rate_limits": { 00:06:44.824 "r_mbytes_per_sec": 0, 00:06:44.824 "rw_ios_per_sec": 0, 00:06:44.824 "rw_mbytes_per_sec": 0, 00:06:44.824 "w_mbytes_per_sec": 0 00:06:44.824 }, 00:06:44.824 "block_size": 512, 00:06:44.824 "claim_type": "exclusive_write", 00:06:44.824 "claimed": true, 00:06:44.824 "driver_specific": {}, 00:06:44.824 "memory_domains": [ 00:06:44.824 { 00:06:44.824 "dma_device_id": "system", 00:06:44.824 "dma_device_type": 1 00:06:44.824 }, 00:06:44.824 { 00:06:44.824 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:44.824 "dma_device_type": 2 00:06:44.824 } 00:06:44.824 ], 00:06:44.824 "name": "Malloc1", 00:06:44.824 "num_blocks": 1048576, 00:06:44.824 "product_name": "Malloc disk", 00:06:44.824 "supported_io_types": { 00:06:44.824 "abort": true, 00:06:44.824 "compare": false, 00:06:44.824 "compare_and_write": false, 00:06:44.824 "copy": true, 00:06:44.824 "flush": true, 00:06:44.824 "get_zone_info": false, 00:06:44.824 "nvme_admin": false, 00:06:44.824 "nvme_io": false, 00:06:44.824 "nvme_io_md": false, 00:06:44.824 "nvme_iov_md": false, 00:06:44.824 "read": true, 00:06:44.824 "reset": true, 00:06:44.824 "seek_data": false, 00:06:44.824 "seek_hole": false, 00:06:44.824 "unmap": true, 00:06:44.824 "write": true, 00:06:44.824 "write_zeroes": true, 00:06:44.824 "zcopy": true, 00:06:44.824 "zone_append": false, 00:06:44.824 "zone_management": false 00:06:44.824 }, 00:06:44.824 "uuid": "2b55965b-e899-444e-a89f-7e8ec4229e67", 00:06:44.824 "zoned": false 00:06:44.824 } 00:06:44.824 ]' 00:06:44.824 18:26:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:06:44.824 18:26:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:06:44.824 18:26:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:06:45.082 18:26:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:06:45.082 18:26:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:06:45.082 18:26:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:06:45.082 18:26:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:45.082 18:26:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid=ee8aff67-4252-4979-91cf-1a72f40d57b6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:45.082 18:26:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:45.082 18:26:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:06:45.082 18:26:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 
nvme_devices=0 00:06:45.082 18:26:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:06:45.082 18:26:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:06:47.616 18:26:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:06:47.616 18:26:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:06:47.616 18:26:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:06:47.616 18:26:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:06:47.616 18:26:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:06:47.616 18:26:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:06:47.616 18:26:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:47.616 18:26:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:47.616 18:26:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:47.616 18:26:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:47.616 18:26:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:47.616 18:26:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:47.616 18:26:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:06:47.616 18:26:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:47.616 18:26:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:47.616 18:26:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:47.616 18:26:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:47.616 18:26:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:06:47.616 18:26:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:06:48.551 18:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:06:48.551 18:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:48.551 18:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:48.551 18:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.551 18:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:48.551 ************************************ 
00:06:48.551 START TEST filesystem_ext4 00:06:48.551 ************************************ 00:06:48.551 18:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:48.551 18:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:48.551 18:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:48.551 18:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:48.551 18:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:06:48.551 18:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:48.551 18:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:06:48.551 18:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:06:48.551 18:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:06:48.551 18:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:06:48.551 18:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:48.551 mke2fs 1.46.5 (30-Dec-2021) 00:06:48.551 Discarding device blocks: 0/522240 done 00:06:48.551 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:48.551 Filesystem UUID: 76dc4d93-37a5-4c9e-a7b1-cd9265c41b57 00:06:48.551 Superblock backups stored on blocks: 00:06:48.551 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:48.551 00:06:48.551 Allocating group tables: 0/64 done 00:06:48.551 Writing inode tables: 0/64 done 00:06:48.551 Creating journal (8192 blocks): done 00:06:48.551 Writing superblocks and filesystem accounting information: 0/64 done 00:06:48.551 00:06:48.551 18:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:06:48.551 18:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:48.551 18:26:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:48.810 18:26:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:06:48.810 18:26:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:48.810 18:26:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:06:48.810 18:26:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:06:48.810 18:26:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:48.810 18:26:11 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 65131 00:06:48.810 18:26:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:48.810 18:26:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:48.810 18:26:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:48.810 18:26:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:48.810 00:06:48.810 real 0m0.468s 00:06:48.810 user 0m0.032s 00:06:48.810 sys 0m0.074s 00:06:48.810 18:26:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:48.810 18:26:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:06:48.810 ************************************ 00:06:48.810 END TEST filesystem_ext4 00:06:48.810 ************************************ 00:06:48.810 18:26:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:48.810 18:26:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:48.810 18:26:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:48.810 18:26:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.810 18:26:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:48.810 ************************************ 00:06:48.810 START TEST filesystem_btrfs 00:06:48.810 ************************************ 00:06:48.810 18:26:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:48.810 18:26:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:48.810 18:26:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:48.810 18:26:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:48.810 18:26:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:06:48.810 18:26:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:48.810 18:26:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:06:48.810 18:26:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:06:48.810 18:26:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:06:48.810 18:26:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:06:48.810 
18:26:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:49.069 btrfs-progs v6.6.2 00:06:49.069 See https://btrfs.readthedocs.io for more information. 00:06:49.069 00:06:49.069 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:06:49.069 NOTE: several default settings have changed in version 5.15, please make sure 00:06:49.069 this does not affect your deployments: 00:06:49.069 - DUP for metadata (-m dup) 00:06:49.069 - enabled no-holes (-O no-holes) 00:06:49.069 - enabled free-space-tree (-R free-space-tree) 00:06:49.069 00:06:49.069 Label: (null) 00:06:49.069 UUID: 57cfad25-c25e-44e9-a4ed-742c0f93cb41 00:06:49.069 Node size: 16384 00:06:49.069 Sector size: 4096 00:06:49.069 Filesystem size: 510.00MiB 00:06:49.069 Block group profiles: 00:06:49.069 Data: single 8.00MiB 00:06:49.069 Metadata: DUP 32.00MiB 00:06:49.069 System: DUP 8.00MiB 00:06:49.069 SSD detected: yes 00:06:49.069 Zoned device: no 00:06:49.069 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:49.069 Runtime features: free-space-tree 00:06:49.069 Checksum: crc32c 00:06:49.069 Number of devices: 1 00:06:49.069 Devices: 00:06:49.069 ID SIZE PATH 00:06:49.069 1 510.00MiB /dev/nvme0n1p1 00:06:49.069 00:06:49.069 18:26:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:06:49.069 18:26:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:49.069 18:26:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:49.069 18:26:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:06:49.069 18:26:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:49.069 18:26:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:06:49.069 18:26:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:06:49.069 18:26:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:49.069 18:26:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 65131 00:06:49.069 18:26:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:49.069 18:26:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:49.069 18:26:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:49.069 18:26:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:49.069 00:06:49.069 real 0m0.283s 00:06:49.069 user 0m0.024s 00:06:49.069 sys 0m0.099s 00:06:49.069 18:26:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:49.069 18:26:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:06:49.069 
************************************ 00:06:49.069 END TEST filesystem_btrfs 00:06:49.069 ************************************ 00:06:49.328 18:26:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:49.328 18:26:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:06:49.328 18:26:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:49.328 18:26:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.328 18:26:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:49.328 ************************************ 00:06:49.328 START TEST filesystem_xfs 00:06:49.328 ************************************ 00:06:49.328 18:26:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:06:49.328 18:26:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:06:49.328 18:26:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:49.328 18:26:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:49.328 18:26:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:06:49.328 18:26:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:49.328 18:26:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:06:49.328 18:26:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:06:49.328 18:26:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:06:49.328 18:26:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:06:49.328 18:26:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:49.328 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:49.328 = sectsz=512 attr=2, projid32bit=1 00:06:49.328 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:49.329 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:49.329 data = bsize=4096 blocks=130560, imaxpct=25 00:06:49.329 = sunit=0 swidth=0 blks 00:06:49.329 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:49.329 log =internal log bsize=4096 blocks=16384, version=2 00:06:49.329 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:49.329 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:49.895 Discarding blocks...Done. 
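The filesystem exercise that the trace repeats for ext4, btrfs and xfs comes down to the steps below. The loop form and variable names are illustrative; the individual commands and checks are the ones shown in the target/filesystem.sh trace above.

  # sketch of the per-filesystem check repeated by filesystem_ext4/btrfs/xfs
  nvmfpid=65131                      # pid reported by nvmfappstart in this part
  dev=/dev/nvme0n1p1                 # partition created with parted/partprobe
  for fstype in ext4 btrfs xfs; do
    if [ "$fstype" = ext4 ]; then force=-F; else force=-f; fi
    mkfs.$fstype $force "$dev"
    mount "$dev" /mnt/device
    touch /mnt/device/aaa && sync
    rm /mnt/device/aaa && sync
    umount /mnt/device
    kill -0 "$nvmfpid"                         # target must still be running
    lsblk -l -o NAME | grep -q -w nvme0n1      # namespace still visible
    lsblk -l -o NAME | grep -q -w nvme0n1p1    # partition still visible
  done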
00:06:49.895 18:26:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:06:49.895 18:26:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:52.429 18:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:52.429 18:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:06:52.429 18:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:52.429 18:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:06:52.429 18:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:06:52.429 18:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:52.429 18:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 65131 00:06:52.429 18:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:52.429 18:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:52.429 18:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:52.429 18:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:52.429 00:06:52.429 real 0m3.032s 00:06:52.429 user 0m0.029s 00:06:52.429 sys 0m0.067s 00:06:52.429 18:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:52.429 18:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:06:52.429 ************************************ 00:06:52.429 END TEST filesystem_xfs 00:06:52.429 ************************************ 00:06:52.429 18:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:52.429 18:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:52.429 18:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:06:52.429 18:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:52.429 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:52.429 18:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:52.429 18:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:06:52.429 18:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:06:52.429 18:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:52.429 18:26:14 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:06:52.429 18:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:52.429 18:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:06:52.429 18:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:52.429 18:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:52.429 18:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:52.429 18:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:52.429 18:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:52.429 18:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 65131 00:06:52.429 18:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 65131 ']' 00:06:52.429 18:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 65131 00:06:52.429 18:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:06:52.429 18:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:52.429 18:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65131 00:06:52.429 18:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:52.429 18:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:52.429 killing process with pid 65131 00:06:52.429 18:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65131' 00:06:52.429 18:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 65131 00:06:52.429 18:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 65131 00:06:52.994 18:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:06:52.994 00:06:52.994 real 0m9.245s 00:06:52.994 user 0m34.597s 00:06:52.994 sys 0m1.842s 00:06:52.994 18:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:52.994 18:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:52.994 ************************************ 00:06:52.994 END TEST nvmf_filesystem_no_in_capsule 00:06:52.994 ************************************ 00:06:52.994 18:26:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:06:52.994 18:26:15 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:06:52.994 18:26:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 
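Before the in_capsule variant below starts, note that the two test parts differ only in the in-capsule data size handed to the TCP transport (0 above, 4096 below). The bring-up and teardown they both trace is, in outline, the following; rpc_cmd is assumed here to wrap the repo's scripts/rpc.py against the target running inside nvmf_tgt_ns_spdk, and the host NQN/UUID is the one shown in the trace.

  in_capsule=0   # 0 for nvmf_filesystem_no_in_capsule, 4096 for nvmf_filesystem_in_capsule
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c "$in_capsule"
  rpc_cmd bdev_malloc_create 512 512 -b Malloc1
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 \
    --hostid=ee8aff67-4252-4979-91cf-1a72f40d57b6
  # teardown traced at the end of each part
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1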
00:06:52.994 18:26:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.994 18:26:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:52.994 ************************************ 00:06:52.994 START TEST nvmf_filesystem_in_capsule 00:06:52.994 ************************************ 00:06:52.994 18:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:06:52.995 18:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:06:52.995 18:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:52.995 18:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:52.995 18:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:52.995 18:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:52.995 18:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=65437 00:06:52.995 18:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 65437 00:06:52.995 18:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:52.995 18:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 65437 ']' 00:06:52.995 18:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.995 18:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:52.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.995 18:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.995 18:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:52.995 18:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:52.995 [2024-07-15 18:26:15.438720] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:06:52.995 [2024-07-15 18:26:15.438806] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:52.995 [2024-07-15 18:26:15.567870] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:53.252 [2024-07-15 18:26:15.670180] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:53.252 [2024-07-15 18:26:15.670224] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:53.252 [2024-07-15 18:26:15.670233] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:53.252 [2024-07-15 18:26:15.670242] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:06:53.252 [2024-07-15 18:26:15.670248] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:53.252 [2024-07-15 18:26:15.670996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:53.252 [2024-07-15 18:26:15.674645] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:53.252 [2024-07-15 18:26:15.678202] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.252 [2024-07-15 18:26:15.678203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:53.838 18:26:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:53.838 18:26:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:06:53.838 18:26:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:53.838 18:26:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:53.838 18:26:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:53.838 18:26:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:53.838 18:26:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:53.838 18:26:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:06:53.838 18:26:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:53.838 18:26:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:53.838 [2024-07-15 18:26:16.363936] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:53.838 18:26:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:53.838 18:26:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:53.838 18:26:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:53.838 18:26:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:54.095 Malloc1 00:06:54.095 18:26:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:54.095 18:26:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:54.095 18:26:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:54.095 18:26:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:54.095 18:26:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:54.095 18:26:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:54.095 18:26:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:54.095 18:26:16 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:54.095 18:26:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:54.095 18:26:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:54.095 18:26:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:54.095 18:26:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:54.095 [2024-07-15 18:26:16.531531] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:54.095 18:26:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:54.095 18:26:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:54.095 18:26:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:06:54.095 18:26:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:06:54.096 18:26:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:06:54.096 18:26:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:06:54.096 18:26:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:54.096 18:26:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:54.096 18:26:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:54.096 18:26:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:54.096 18:26:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:06:54.096 { 00:06:54.096 "aliases": [ 00:06:54.096 "f237dc88-9900-4680-af2a-bee9c4092da8" 00:06:54.096 ], 00:06:54.096 "assigned_rate_limits": { 00:06:54.096 "r_mbytes_per_sec": 0, 00:06:54.096 "rw_ios_per_sec": 0, 00:06:54.096 "rw_mbytes_per_sec": 0, 00:06:54.096 "w_mbytes_per_sec": 0 00:06:54.096 }, 00:06:54.096 "block_size": 512, 00:06:54.096 "claim_type": "exclusive_write", 00:06:54.096 "claimed": true, 00:06:54.096 "driver_specific": {}, 00:06:54.096 "memory_domains": [ 00:06:54.096 { 00:06:54.096 "dma_device_id": "system", 00:06:54.096 "dma_device_type": 1 00:06:54.096 }, 00:06:54.096 { 00:06:54.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:54.096 "dma_device_type": 2 00:06:54.096 } 00:06:54.096 ], 00:06:54.096 "name": "Malloc1", 00:06:54.096 "num_blocks": 1048576, 00:06:54.096 "product_name": "Malloc disk", 00:06:54.096 "supported_io_types": { 00:06:54.096 "abort": true, 00:06:54.096 "compare": false, 00:06:54.096 "compare_and_write": false, 00:06:54.096 "copy": true, 00:06:54.096 "flush": true, 00:06:54.096 "get_zone_info": false, 00:06:54.096 "nvme_admin": false, 00:06:54.096 "nvme_io": false, 00:06:54.096 "nvme_io_md": false, 00:06:54.096 "nvme_iov_md": false, 00:06:54.096 "read": true, 00:06:54.096 "reset": true, 00:06:54.096 "seek_data": false, 00:06:54.096 "seek_hole": false, 00:06:54.096 "unmap": true, 
00:06:54.096 "write": true, 00:06:54.096 "write_zeroes": true, 00:06:54.096 "zcopy": true, 00:06:54.096 "zone_append": false, 00:06:54.096 "zone_management": false 00:06:54.096 }, 00:06:54.096 "uuid": "f237dc88-9900-4680-af2a-bee9c4092da8", 00:06:54.096 "zoned": false 00:06:54.096 } 00:06:54.096 ]' 00:06:54.096 18:26:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:06:54.096 18:26:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:06:54.096 18:26:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:06:54.096 18:26:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:06:54.096 18:26:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:06:54.096 18:26:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:06:54.096 18:26:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:54.096 18:26:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid=ee8aff67-4252-4979-91cf-1a72f40d57b6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:54.353 18:26:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:54.353 18:26:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:06:54.353 18:26:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:06:54.353 18:26:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:06:54.353 18:26:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:06:56.245 18:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:06:56.245 18:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:06:56.245 18:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:06:56.501 18:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:06:56.501 18:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:06:56.501 18:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:06:56.501 18:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:56.501 18:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:56.501 18:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:56.501 18:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:56.501 18:26:18 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:56.501 18:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:56.501 18:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:06:56.501 18:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:56.501 18:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:56.501 18:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:56.501 18:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:56.501 18:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:06:56.501 18:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:06:57.488 18:26:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:06:57.488 18:26:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:57.488 18:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:57.488 18:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.488 18:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:57.488 ************************************ 00:06:57.488 START TEST filesystem_in_capsule_ext4 00:06:57.488 ************************************ 00:06:57.488 18:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:57.488 18:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:57.488 18:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:57.488 18:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:57.488 18:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:06:57.488 18:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:57.488 18:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:06:57.488 18:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:06:57.488 18:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:06:57.488 18:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:06:57.488 18:26:20 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:57.488 mke2fs 1.46.5 (30-Dec-2021) 00:06:57.745 Discarding device blocks: 0/522240 done 00:06:57.745 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:57.745 Filesystem UUID: 8bc02835-3aec-4e66-b860-d5c753eb6d50 00:06:57.745 Superblock backups stored on blocks: 00:06:57.745 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:57.745 00:06:57.745 Allocating group tables: 0/64 done 00:06:57.746 Writing inode tables: 0/64 done 00:06:57.746 Creating journal (8192 blocks): done 00:06:57.746 Writing superblocks and filesystem accounting information: 0/64 done 00:06:57.746 00:06:57.746 18:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:06:57.746 18:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:57.746 18:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:57.746 18:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:06:57.746 18:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:57.746 18:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:06:57.746 18:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:06:57.746 18:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:58.004 18:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 65437 00:06:58.004 18:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:58.004 18:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:58.004 18:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:58.004 18:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:58.004 00:06:58.004 real 0m0.382s 00:06:58.004 user 0m0.032s 00:06:58.004 sys 0m0.086s 00:06:58.004 18:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:58.004 18:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:06:58.004 ************************************ 00:06:58.004 END TEST filesystem_in_capsule_ext4 00:06:58.004 ************************************ 00:06:58.004 18:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:58.004 18:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:58.004 18:26:20 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:58.004 18:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.004 18:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:58.004 ************************************ 00:06:58.004 START TEST filesystem_in_capsule_btrfs 00:06:58.004 ************************************ 00:06:58.004 18:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:58.004 18:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:58.004 18:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:58.004 18:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:58.004 18:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:06:58.004 18:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:58.004 18:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:06:58.004 18:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:06:58.004 18:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:06:58.004 18:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:06:58.004 18:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:58.004 btrfs-progs v6.6.2 00:06:58.004 See https://btrfs.readthedocs.io for more information. 00:06:58.004 00:06:58.004 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
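The ext4 pass above and the btrfs pass that follows exercise the same mount, write, and unmount loop against the exported namespace, then confirm the target process and the block devices are still present. A condensed sketch of that traced flow, using only the device, mountpoint, and pid values visible in the log (anything beyond those traced commands is an assumption), looks like this:

  # Hypothetical condensation of the traced target/filesystem.sh steps.
  dev=/dev/nvme0n1p1        # partition exported over NVMe/TCP (from the trace)
  mnt=/mnt/device
  tgt_pid=65437             # nvmf_tgt pid being monitored (from the trace)

  mkfs.ext4 -F "$dev"                       # create the filesystem (btrfs/xfs use -f)
  mount "$dev" "$mnt"
  touch "$mnt/aaa" && sync                  # write through the fabric, then flush
  rm "$mnt/aaa" && sync
  umount "$mnt"
  kill -0 "$tgt_pid"                        # target process must still be alive
  lsblk -l -o NAME | grep -q -w nvme0n1     # controller still visible
  lsblk -l -o NAME | grep -q -w nvme0n1p1   # partition still visible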
00:06:58.004 NOTE: several default settings have changed in version 5.15, please make sure 00:06:58.004 this does not affect your deployments: 00:06:58.004 - DUP for metadata (-m dup) 00:06:58.004 - enabled no-holes (-O no-holes) 00:06:58.004 - enabled free-space-tree (-R free-space-tree) 00:06:58.004 00:06:58.004 Label: (null) 00:06:58.004 UUID: cf0719c6-0d35-42e4-b1ba-ce88ca51ddd6 00:06:58.004 Node size: 16384 00:06:58.004 Sector size: 4096 00:06:58.004 Filesystem size: 510.00MiB 00:06:58.004 Block group profiles: 00:06:58.004 Data: single 8.00MiB 00:06:58.004 Metadata: DUP 32.00MiB 00:06:58.004 System: DUP 8.00MiB 00:06:58.004 SSD detected: yes 00:06:58.004 Zoned device: no 00:06:58.004 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:58.004 Runtime features: free-space-tree 00:06:58.004 Checksum: crc32c 00:06:58.004 Number of devices: 1 00:06:58.004 Devices: 00:06:58.004 ID SIZE PATH 00:06:58.004 1 510.00MiB /dev/nvme0n1p1 00:06:58.004 00:06:58.004 18:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:06:58.004 18:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:58.262 18:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:58.262 18:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:06:58.262 18:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:58.262 18:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:06:58.262 18:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:06:58.262 18:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:58.262 18:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 65437 00:06:58.262 18:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:58.262 18:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:58.262 18:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:58.262 18:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:58.262 00:06:58.262 real 0m0.245s 00:06:58.262 user 0m0.024s 00:06:58.262 sys 0m0.096s 00:06:58.262 18:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:58.262 18:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:06:58.262 ************************************ 00:06:58.262 END TEST filesystem_in_capsule_btrfs 00:06:58.262 ************************************ 00:06:58.262 18:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1142 -- # return 0 00:06:58.262 18:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:06:58.262 18:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:58.262 18:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.262 18:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:58.262 ************************************ 00:06:58.262 START TEST filesystem_in_capsule_xfs 00:06:58.262 ************************************ 00:06:58.262 18:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:06:58.262 18:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:06:58.262 18:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:58.262 18:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:58.263 18:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:06:58.263 18:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:58.263 18:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:06:58.263 18:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:06:58.263 18:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:06:58.263 18:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:06:58.263 18:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:58.520 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:58.520 = sectsz=512 attr=2, projid32bit=1 00:06:58.520 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:58.520 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:58.520 data = bsize=4096 blocks=130560, imaxpct=25 00:06:58.520 = sunit=0 swidth=0 blks 00:06:58.520 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:58.520 log =internal log bsize=4096 blocks=16384, version=2 00:06:58.520 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:58.520 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:59.084 Discarding blocks...Done. 
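The xtrace lines at common/autotest_common.sh@924-935 show a small make_filesystem wrapper that only varies the force flag per filesystem type before calling the matching mkfs tool. A minimal reconstruction consistent with those traced lines follows; retry or error handling beyond the visible i=0 counter is not shown in the log and is left out here:

  # Sketch of the make_filesystem helper as reconstructed from the xtrace;
  # anything past the traced lines is an assumption.
  make_filesystem() {
      local fstype=$1
      local dev_name=$2
      local i=0
      local force
      if [ "$fstype" = ext4 ]; then
          force=-F            # mke2fs uses uppercase -F to force
      else
          force=-f            # btrfs-progs and xfsprogs use lowercase -f
      fi
      mkfs."$fstype" $force "$dev_name"
  }

  # e.g. make_filesystem xfs /dev/nvme0n1p1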
00:06:59.084 18:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:06:59.084 18:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:00.983 18:26:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:00.983 18:26:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:07:00.983 18:26:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:00.983 18:26:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:07:00.983 18:26:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:07:00.983 18:26:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:00.983 18:26:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 65437 00:07:00.983 18:26:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:00.983 18:26:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:00.983 18:26:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:00.983 18:26:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:00.983 ************************************ 00:07:00.983 END TEST filesystem_in_capsule_xfs 00:07:00.983 ************************************ 00:07:00.983 00:07:00.983 real 0m2.706s 00:07:00.983 user 0m0.026s 00:07:00.983 sys 0m0.081s 00:07:00.983 18:26:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:00.983 18:26:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:00.983 18:26:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:00.983 18:26:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:00.983 18:26:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:00.983 18:26:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:01.242 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:01.242 18:26:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:01.242 18:26:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:07:01.242 18:26:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:01.242 18:26:23 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:01.242 18:26:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:01.242 18:26:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:01.242 18:26:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:07:01.242 18:26:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:01.242 18:26:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.242 18:26:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:01.242 18:26:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.242 18:26:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:01.242 18:26:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 65437 00:07:01.242 18:26:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 65437 ']' 00:07:01.242 18:26:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 65437 00:07:01.242 18:26:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:07:01.242 18:26:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:01.242 18:26:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65437 00:07:01.242 killing process with pid 65437 00:07:01.242 18:26:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:01.242 18:26:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:01.242 18:26:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65437' 00:07:01.242 18:26:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 65437 00:07:01.242 18:26:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 65437 00:07:01.500 18:26:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:01.500 00:07:01.500 real 0m8.709s 00:07:01.500 user 0m32.972s 00:07:01.500 sys 0m1.644s 00:07:01.500 ************************************ 00:07:01.500 END TEST nvmf_filesystem_in_capsule 00:07:01.500 ************************************ 00:07:01.500 18:26:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:01.500 18:26:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:01.759 18:26:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:07:01.760 18:26:24 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:07:01.760 18:26:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:07:01.760 18:26:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:07:01.760 18:26:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:01.760 18:26:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:07:01.760 18:26:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:01.760 18:26:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:01.760 rmmod nvme_tcp 00:07:01.760 rmmod nvme_fabrics 00:07:01.760 rmmod nvme_keyring 00:07:01.760 18:26:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:01.760 18:26:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:07:01.760 18:26:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:07:01.760 18:26:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:07:01.760 18:26:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:01.760 18:26:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:01.760 18:26:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:01.760 18:26:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:01.760 18:26:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:01.760 18:26:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:01.760 18:26:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:01.760 18:26:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:01.760 18:26:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:01.760 00:07:01.760 real 0m19.073s 00:07:01.760 user 1m7.901s 00:07:01.760 sys 0m4.062s 00:07:01.760 18:26:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:01.760 ************************************ 00:07:01.760 END TEST nvmf_filesystem 00:07:01.760 ************************************ 00:07:01.760 18:26:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:01.760 18:26:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:01.760 18:26:24 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:01.760 18:26:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:01.760 18:26:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:01.760 18:26:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:01.760 ************************************ 00:07:01.760 START TEST nvmf_target_discovery 00:07:01.760 ************************************ 00:07:01.760 18:26:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:02.018 * Looking for test storage... 
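The nvmftestfini teardown traced just above syncs, unloads the NVMe/TCP initiator modules with a bounded retry loop, removes the target network namespace, and flushes the initiator interface. An approximate sketch of that sequence is below; the sleep between retries and the exact form of the namespace removal are assumptions, the rest mirrors the traced commands:

  # Approximation of the traced nvmfcleanup / nvmf_tcp_fini teardown.
  sync
  set +e
  for i in {1..20}; do
      modprobe -v -r nvme-tcp && break           # unloads nvme_tcp, nvme_fabrics, nvme_keyring
      sleep 1                                    # pause between attempts (assumption)
  done
  modprobe -v -r nvme-fabrics
  set -e
  ip netns delete nvmf_tgt_ns_spdk 2>/dev/null   # stand-in for _remove_spdk_ns (assumption)
  ip -4 addr flush nvmf_init_if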
00:07:02.018 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:02.018 18:26:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:02.018 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:07:02.018 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:02.018 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:02.018 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:02.018 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:02.018 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:02.018 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:02.018 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:02.018 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:02.018 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:02.018 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:02.018 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:07:02.018 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=ee8aff67-4252-4979-91cf-1a72f40d57b6 00:07:02.018 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:02.018 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:02.018 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:02.018 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:02.018 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:02.018 18:26:24 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:02.018 18:26:24 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:02.018 18:26:24 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:02.018 18:26:24 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.018 18:26:24 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.018 18:26:24 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.018 18:26:24 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:07:02.018 18:26:24 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.018 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:07:02.018 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:02.019 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:02.019 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:02.019 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:02.019 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:02.019 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:02.019 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:02.019 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:02.019 18:26:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:02.019 18:26:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:02.019 18:26:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:02.019 18:26:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:07:02.019 18:26:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:07:02.019 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:02.019 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:02.019 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:07:02.019 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:02.019 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:02.019 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:02.019 18:26:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:02.019 18:26:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:02.019 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:02.019 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:02.019 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:02.019 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:02.019 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:02.019 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:02.019 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:02.019 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:02.019 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:02.019 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:02.019 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:02.019 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:02.019 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:02.019 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:02.019 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:02.019 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:02.019 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:02.019 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:02.019 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:02.019 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:02.019 Cannot find device "nvmf_tgt_br" 00:07:02.019 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@155 -- # true 00:07:02.019 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:02.019 Cannot find device "nvmf_tgt_br2" 00:07:02.019 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@156 -- # true 00:07:02.019 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:02.019 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:02.278 Cannot find device "nvmf_tgt_br" 00:07:02.278 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@158 -- # true 00:07:02.278 18:26:24 nvmf_tcp.nvmf_target_discovery -- 
nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:02.278 Cannot find device "nvmf_tgt_br2" 00:07:02.278 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@159 -- # true 00:07:02.278 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:02.278 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:02.278 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:02.278 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:02.278 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@162 -- # true 00:07:02.278 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:02.278 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:02.278 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@163 -- # true 00:07:02.278 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:02.278 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:02.278 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:02.278 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:02.278 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:02.278 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:02.278 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:02.278 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:02.278 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:02.278 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:02.278 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:02.278 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:02.278 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:02.278 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:02.278 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:02.537 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:02.537 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:02.537 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:02.537 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:02.537 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:02.537 18:26:24 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:02.537 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:02.537 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:02.537 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:02.537 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:02.537 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.123 ms 00:07:02.537 00:07:02.537 --- 10.0.0.2 ping statistics --- 00:07:02.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:02.537 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:07:02.537 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:02.537 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:02.537 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:07:02.537 00:07:02.537 --- 10.0.0.3 ping statistics --- 00:07:02.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:02.537 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:07:02.537 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:02.537 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:02.537 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:07:02.537 00:07:02.537 --- 10.0.0.1 ping statistics --- 00:07:02.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:02.537 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:07:02.537 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:02.537 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@433 -- # return 0 00:07:02.537 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:02.537 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:02.537 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:02.537 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:02.537 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:02.537 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:02.537 18:26:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:02.537 18:26:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:02.537 18:26:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:02.537 18:26:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:02.537 18:26:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:02.537 18:26:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=65897 00:07:02.537 18:26:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:02.537 18:26:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 65897 00:07:02.537 18:26:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 65897 ']' 00:07:02.537 18:26:25 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.537 18:26:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:02.537 18:26:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:02.537 18:26:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:02.537 18:26:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:02.537 [2024-07-15 18:26:25.075129] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:07:02.538 [2024-07-15 18:26:25.075324] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:02.797 [2024-07-15 18:26:25.218392] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:02.797 [2024-07-15 18:26:25.331170] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:02.797 [2024-07-15 18:26:25.331220] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:02.797 [2024-07-15 18:26:25.331230] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:02.797 [2024-07-15 18:26:25.331238] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:02.797 [2024-07-15 18:26:25.331245] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
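The interface plumbing traced at nvmf/common.sh@154-207 builds a veth-plus-bridge topology so the initiator address 10.0.0.1 can reach the target namespace addresses 10.0.0.2 and 10.0.0.3 on TCP port 4420. A condensed reconstruction of those traced commands (setup only, omitting the "Cannot find device" teardown attempts at the top of the block):

  # Condensed from the traced nvmf_veth_init commands; names and addresses are
  # taken verbatim from the log.
  NS=nvmf_tgt_ns_spdk
  ip netns add "$NS"
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns "$NS"
  ip link set nvmf_tgt_if2 netns "$NS"
  ip addr add 10.0.0.1/24 dev nvmf_init_if                       # initiator side
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target IP
  ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target IP
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec "$NS" ip link set nvmf_tgt_if up
  ip netns exec "$NS" ip link set nvmf_tgt_if2 up
  ip netns exec "$NS" ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                       # reachability check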
00:07:02.797 [2024-07-15 18:26:25.331388] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:02.797 [2024-07-15 18:26:25.331606] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:02.797 [2024-07-15 18:26:25.332285] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.797 [2024-07-15 18:26:25.332285] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:03.365 18:26:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:03.365 18:26:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:07:03.365 18:26:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:03.365 18:26:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:03.365 18:26:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:03.625 18:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:03.625 18:26:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:03.625 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.625 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:03.625 [2024-07-15 18:26:26.038845] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:03.625 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.625 18:26:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:07:03.625 18:26:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:03.625 18:26:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:07:03.625 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.625 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:03.625 Null1 00:07:03.625 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.625 18:26:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:03.625 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.625 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:03.625 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.625 18:26:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:03.625 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.625 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:03.625 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.625 18:26:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:03.625 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.625 18:26:26 nvmf_tcp.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:07:03.625 [2024-07-15 18:26:26.121881] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:03.625 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.625 18:26:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:03.625 18:26:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:03.625 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.625 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:03.625 Null2 00:07:03.625 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.625 18:26:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:03.625 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.625 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:03.625 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.625 18:26:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:03.625 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.625 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:03.625 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.625 18:26:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:07:03.625 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.625 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:03.625 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.625 18:26:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:03.625 18:26:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:03.625 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.625 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:03.625 Null3 00:07:03.625 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.625 18:26:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:03.625 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.625 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:03.625 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.625 18:26:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:03.625 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.625 18:26:26 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:03.625 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.625 18:26:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:07:03.625 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.625 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:03.625 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.625 18:26:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:03.625 18:26:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:07:03.625 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.625 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:03.625 Null4 00:07:03.625 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.625 18:26:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:03.625 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.625 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:03.625 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.625 18:26:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:03.625 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.625 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:03.885 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.885 18:26:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:07:03.885 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.885 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:03.885 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.885 18:26:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:03.885 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.885 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:03.885 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.885 18:26:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:07:03.885 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.885 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:03.885 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.885 
18:26:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid=ee8aff67-4252-4979-91cf-1a72f40d57b6 -t tcp -a 10.0.0.2 -s 4420 00:07:03.885 00:07:03.885 Discovery Log Number of Records 6, Generation counter 6 00:07:03.885 =====Discovery Log Entry 0====== 00:07:03.885 trtype: tcp 00:07:03.885 adrfam: ipv4 00:07:03.885 subtype: current discovery subsystem 00:07:03.885 treq: not required 00:07:03.885 portid: 0 00:07:03.885 trsvcid: 4420 00:07:03.885 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:03.885 traddr: 10.0.0.2 00:07:03.885 eflags: explicit discovery connections, duplicate discovery information 00:07:03.885 sectype: none 00:07:03.885 =====Discovery Log Entry 1====== 00:07:03.885 trtype: tcp 00:07:03.885 adrfam: ipv4 00:07:03.885 subtype: nvme subsystem 00:07:03.885 treq: not required 00:07:03.885 portid: 0 00:07:03.885 trsvcid: 4420 00:07:03.885 subnqn: nqn.2016-06.io.spdk:cnode1 00:07:03.885 traddr: 10.0.0.2 00:07:03.885 eflags: none 00:07:03.885 sectype: none 00:07:03.885 =====Discovery Log Entry 2====== 00:07:03.885 trtype: tcp 00:07:03.885 adrfam: ipv4 00:07:03.885 subtype: nvme subsystem 00:07:03.885 treq: not required 00:07:03.885 portid: 0 00:07:03.885 trsvcid: 4420 00:07:03.885 subnqn: nqn.2016-06.io.spdk:cnode2 00:07:03.885 traddr: 10.0.0.2 00:07:03.885 eflags: none 00:07:03.885 sectype: none 00:07:03.885 =====Discovery Log Entry 3====== 00:07:03.885 trtype: tcp 00:07:03.885 adrfam: ipv4 00:07:03.885 subtype: nvme subsystem 00:07:03.885 treq: not required 00:07:03.885 portid: 0 00:07:03.885 trsvcid: 4420 00:07:03.885 subnqn: nqn.2016-06.io.spdk:cnode3 00:07:03.885 traddr: 10.0.0.2 00:07:03.885 eflags: none 00:07:03.885 sectype: none 00:07:03.885 =====Discovery Log Entry 4====== 00:07:03.885 trtype: tcp 00:07:03.885 adrfam: ipv4 00:07:03.885 subtype: nvme subsystem 00:07:03.885 treq: not required 00:07:03.885 portid: 0 00:07:03.885 trsvcid: 4420 00:07:03.885 subnqn: nqn.2016-06.io.spdk:cnode4 00:07:03.885 traddr: 10.0.0.2 00:07:03.885 eflags: none 00:07:03.885 sectype: none 00:07:03.885 =====Discovery Log Entry 5====== 00:07:03.885 trtype: tcp 00:07:03.885 adrfam: ipv4 00:07:03.885 subtype: discovery subsystem referral 00:07:03.885 treq: not required 00:07:03.885 portid: 0 00:07:03.885 trsvcid: 4430 00:07:03.885 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:03.885 traddr: 10.0.0.2 00:07:03.885 eflags: none 00:07:03.885 sectype: none 00:07:03.885 Perform nvmf subsystem discovery via RPC 00:07:03.885 18:26:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:07:03.885 18:26:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:07:03.885 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.885 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:03.885 [ 00:07:03.885 { 00:07:03.885 "allow_any_host": true, 00:07:03.885 "hosts": [], 00:07:03.885 "listen_addresses": [ 00:07:03.885 { 00:07:03.885 "adrfam": "IPv4", 00:07:03.885 "traddr": "10.0.0.2", 00:07:03.885 "trsvcid": "4420", 00:07:03.885 "trtype": "TCP" 00:07:03.885 } 00:07:03.885 ], 00:07:03.885 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:07:03.885 "subtype": "Discovery" 00:07:03.885 }, 00:07:03.885 { 00:07:03.885 "allow_any_host": true, 00:07:03.885 "hosts": [], 00:07:03.885 "listen_addresses": [ 00:07:03.885 { 
00:07:03.885 "adrfam": "IPv4", 00:07:03.885 "traddr": "10.0.0.2", 00:07:03.885 "trsvcid": "4420", 00:07:03.885 "trtype": "TCP" 00:07:03.885 } 00:07:03.885 ], 00:07:03.885 "max_cntlid": 65519, 00:07:03.885 "max_namespaces": 32, 00:07:03.885 "min_cntlid": 1, 00:07:03.885 "model_number": "SPDK bdev Controller", 00:07:03.885 "namespaces": [ 00:07:03.885 { 00:07:03.885 "bdev_name": "Null1", 00:07:03.885 "name": "Null1", 00:07:03.885 "nguid": "01621809280A48C3A22379B56BD7410B", 00:07:03.885 "nsid": 1, 00:07:03.885 "uuid": "01621809-280a-48c3-a223-79b56bd7410b" 00:07:03.885 } 00:07:03.885 ], 00:07:03.885 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:07:03.885 "serial_number": "SPDK00000000000001", 00:07:03.885 "subtype": "NVMe" 00:07:03.885 }, 00:07:03.885 { 00:07:03.885 "allow_any_host": true, 00:07:03.885 "hosts": [], 00:07:03.885 "listen_addresses": [ 00:07:03.885 { 00:07:03.885 "adrfam": "IPv4", 00:07:03.885 "traddr": "10.0.0.2", 00:07:03.885 "trsvcid": "4420", 00:07:03.885 "trtype": "TCP" 00:07:03.885 } 00:07:03.885 ], 00:07:03.885 "max_cntlid": 65519, 00:07:03.885 "max_namespaces": 32, 00:07:03.885 "min_cntlid": 1, 00:07:03.885 "model_number": "SPDK bdev Controller", 00:07:03.885 "namespaces": [ 00:07:03.885 { 00:07:03.885 "bdev_name": "Null2", 00:07:03.885 "name": "Null2", 00:07:03.885 "nguid": "5E4018DC6E6C4347AC2080BBBD699310", 00:07:03.885 "nsid": 1, 00:07:03.885 "uuid": "5e4018dc-6e6c-4347-ac20-80bbbd699310" 00:07:03.885 } 00:07:03.885 ], 00:07:03.885 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:03.885 "serial_number": "SPDK00000000000002", 00:07:03.885 "subtype": "NVMe" 00:07:03.885 }, 00:07:03.885 { 00:07:03.885 "allow_any_host": true, 00:07:03.885 "hosts": [], 00:07:03.885 "listen_addresses": [ 00:07:03.885 { 00:07:03.885 "adrfam": "IPv4", 00:07:03.885 "traddr": "10.0.0.2", 00:07:03.885 "trsvcid": "4420", 00:07:03.885 "trtype": "TCP" 00:07:03.885 } 00:07:03.885 ], 00:07:03.885 "max_cntlid": 65519, 00:07:03.885 "max_namespaces": 32, 00:07:03.885 "min_cntlid": 1, 00:07:03.885 "model_number": "SPDK bdev Controller", 00:07:03.885 "namespaces": [ 00:07:03.885 { 00:07:03.885 "bdev_name": "Null3", 00:07:03.885 "name": "Null3", 00:07:03.885 "nguid": "20D1D54E989748088D3F8F2C99551E10", 00:07:03.885 "nsid": 1, 00:07:03.885 "uuid": "20d1d54e-9897-4808-8d3f-8f2c99551e10" 00:07:03.885 } 00:07:03.885 ], 00:07:03.885 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:07:03.885 "serial_number": "SPDK00000000000003", 00:07:03.885 "subtype": "NVMe" 00:07:03.885 }, 00:07:03.885 { 00:07:03.885 "allow_any_host": true, 00:07:03.885 "hosts": [], 00:07:03.885 "listen_addresses": [ 00:07:03.885 { 00:07:03.885 "adrfam": "IPv4", 00:07:03.885 "traddr": "10.0.0.2", 00:07:03.885 "trsvcid": "4420", 00:07:03.885 "trtype": "TCP" 00:07:03.886 } 00:07:03.886 ], 00:07:03.886 "max_cntlid": 65519, 00:07:03.886 "max_namespaces": 32, 00:07:03.886 "min_cntlid": 1, 00:07:03.886 "model_number": "SPDK bdev Controller", 00:07:03.886 "namespaces": [ 00:07:03.886 { 00:07:03.886 "bdev_name": "Null4", 00:07:03.886 "name": "Null4", 00:07:03.886 "nguid": "A47BFF711A904C6CBAB3C561B7A77768", 00:07:03.886 "nsid": 1, 00:07:03.886 "uuid": "a47bff71-1a90-4c6c-bab3-c561b7a77768" 00:07:03.886 } 00:07:03.886 ], 00:07:03.886 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:07:03.886 "serial_number": "SPDK00000000000004", 00:07:03.886 "subtype": "NVMe" 00:07:03.886 } 00:07:03.886 ] 00:07:03.886 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.886 18:26:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 
1 4 00:07:03.886 18:26:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:03.886 18:26:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:03.886 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.886 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:03.886 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.886 18:26:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:07:03.886 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.886 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:03.886 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.886 18:26:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:03.886 18:26:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:07:03.886 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.886 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:03.886 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.886 18:26:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:07:03.886 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.886 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:03.886 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.886 18:26:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:03.886 18:26:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:03.886 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.886 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:03.886 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.886 18:26:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:03.886 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.886 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:03.886 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.886 18:26:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:03.886 18:26:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:03.886 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.886 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:03.886 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.886 18:26:26 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:03.886 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.886 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:03.886 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.886 18:26:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:07:03.886 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.886 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:03.886 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:04.145 18:26:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:04.145 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:04.145 18:26:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:04.145 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:04.145 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:04.145 18:26:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:07:04.145 18:26:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:04.145 18:26:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:04.145 18:26:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:07:04.145 18:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:04.145 18:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:07:04.145 18:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:04.145 18:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:07:04.145 18:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:04.145 18:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:04.145 rmmod nvme_tcp 00:07:04.145 rmmod nvme_fabrics 00:07:04.145 rmmod nvme_keyring 00:07:04.145 18:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:04.145 18:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:07:04.145 18:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:07:04.145 18:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 65897 ']' 00:07:04.145 18:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 65897 00:07:04.145 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 65897 ']' 00:07:04.145 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 65897 00:07:04.145 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:07:04.145 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:04.145 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65897 00:07:04.145 killing process with pid 65897 00:07:04.145 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- 
# process_name=reactor_0 00:07:04.145 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:04.145 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65897' 00:07:04.145 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 65897 00:07:04.145 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 65897 00:07:04.404 18:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:04.404 18:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:04.404 18:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:04.404 18:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:04.404 18:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:04.404 18:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:04.404 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:04.404 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:04.404 18:26:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:04.404 00:07:04.404 real 0m2.561s 00:07:04.404 user 0m6.404s 00:07:04.404 sys 0m0.780s 00:07:04.404 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:04.404 ************************************ 00:07:04.404 END TEST nvmf_target_discovery 00:07:04.404 ************************************ 00:07:04.404 18:26:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:04.404 18:26:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:04.404 18:26:26 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:04.404 18:26:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:04.404 18:26:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:04.404 18:26:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:04.404 ************************************ 00:07:04.404 START TEST nvmf_referrals 00:07:04.404 ************************************ 00:07:04.404 18:26:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:04.664 * Looking for test storage... 
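Before the referrals test gets going, it is worth condensing what the nvmf_target_discovery teardown above actually did: the four subsystems and their null bdevs removed, the referral dropped, and a final check that no bdevs remain. A rough sketch, with SPDK's scripts/rpc.py standing in for the test framework's rpc_cmd wrapper (the wrapper forwards its arguments to rpc.py, so the method names and flags below are the ones visible in the trace):

# Sketch of the discovery-test teardown; rpc.py talks to the default /var/tmp/spdk.sock socket.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for i in $(seq 1 4); do
    "$rpc" nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"    # remove the subsystem first
    "$rpc" bdev_null_delete "Null$i"                              # then delete its backing null bdev
done
"$rpc" nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430  # drop the referral added by the test
"$rpc" bdev_get_bdevs | jq -r '.[].name'                          # prints nothing once cleanup succeeded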
00:07:04.664 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:04.664 18:26:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:04.664 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:07:04.664 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:04.664 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:04.664 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:04.664 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:04.664 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:04.664 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:04.664 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:04.664 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:04.664 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:04.664 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:04.664 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:07:04.664 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=ee8aff67-4252-4979-91cf-1a72f40d57b6 00:07:04.664 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:04.664 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:04.664 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:04.664 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:04.664 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:04.664 18:26:27 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:04.664 18:26:27 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:04.664 18:26:27 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:04.664 18:26:27 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.664 18:26:27 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.664 18:26:27 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.664 18:26:27 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:07:04.664 18:26:27 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.664 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:07:04.664 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:04.664 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:04.664 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:04.664 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:04.664 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:04.664 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:04.664 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:04.664 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:04.664 18:26:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:07:04.664 18:26:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:07:04.664 18:26:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:07:04.664 18:26:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:07:04.664 18:26:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:07:04.664 18:26:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:07:04.664 18:26:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:07:04.664 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:04.664 18:26:27 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:04.664 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:04.664 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:04.664 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:04.664 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:04.664 18:26:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:04.664 18:26:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:04.664 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:04.664 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:04.664 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:04.664 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:04.664 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:04.664 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:04.664 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:04.664 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:04.664 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:04.664 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:04.664 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:04.664 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:04.664 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:04.664 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:04.664 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:04.664 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:04.664 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:04.664 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:04.664 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:04.664 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:04.664 Cannot find device "nvmf_tgt_br" 00:07:04.664 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@155 -- # true 00:07:04.664 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:04.664 Cannot find device "nvmf_tgt_br2" 00:07:04.664 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@156 -- # true 00:07:04.664 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:04.664 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:04.664 Cannot find device "nvmf_tgt_br" 00:07:04.664 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@158 -- # true 00:07:04.664 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:04.925 Cannot find device "nvmf_tgt_br2" 
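The "Cannot find device" messages here (and the "Cannot open network namespace" ones that follow) are expected: nvmf_veth_init begins by tearing down whatever topology a previous run may have left behind, and on a clean host each of those delete/nomaster commands fails harmlessly. A condensed sketch of that pre-cleanup phase (the explicit error suppression is an assumption; the script itself simply tolerates the failures):

# Best-effort removal of leftovers from an earlier run; errors are ignored.
for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" nomaster 2>/dev/null || true
    ip link set "$dev" down 2>/dev/null || true
done
ip link delete nvmf_br type bridge 2>/dev/null || true
ip link delete nvmf_init_if 2>/dev/null || true
ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true   # stand-in for the namespace cleanup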
00:07:04.925 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@159 -- # true 00:07:04.925 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:04.925 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:04.925 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:04.925 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:04.925 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@162 -- # true 00:07:04.925 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:04.925 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:04.925 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@163 -- # true 00:07:04.925 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:04.925 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:04.925 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:04.925 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:04.925 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:04.925 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:04.925 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:04.925 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:04.925 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:04.925 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:04.925 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:04.925 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:04.925 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:04.925 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:04.925 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:04.925 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:04.926 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:04.926 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:04.926 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:05.184 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:05.184 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:05.184 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:05.184 18:26:27 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:05.184 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:05.184 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:05.184 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:07:05.184 00:07:05.184 --- 10.0.0.2 ping statistics --- 00:07:05.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:05.184 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:07:05.184 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:05.184 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:05.184 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:07:05.184 00:07:05.184 --- 10.0.0.3 ping statistics --- 00:07:05.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:05.184 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:07:05.184 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:05.184 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:05.184 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:07:05.184 00:07:05.184 --- 10.0.0.1 ping statistics --- 00:07:05.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:05.184 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:07:05.184 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:05.184 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@433 -- # return 0 00:07:05.184 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:05.184 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:05.184 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:05.184 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:05.184 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:05.184 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:05.184 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:05.184 18:26:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:07:05.184 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:05.184 18:26:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:05.184 18:26:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:05.184 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=66129 00:07:05.184 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 66129 00:07:05.184 18:26:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:05.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
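With all three pings succeeding, the veth topology rebuilt by nvmf_veth_init is in place. Stripped of the xtrace noise, and with the second target interface (nvmf_tgt_if2, 10.0.0.3) left out for brevity, the setup amounts to roughly the following; the names and addresses are taken from the trace, but this is only a sketch of nvmf/common.sh, not a copy of it:

# Target namespace plus two veth pairs bridged on the host side.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end stays on the host
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end moves into the namespace
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up   # bridge the host-side peer ends together
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                          # sanity check: initiator reaches the target address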
00:07:05.184 18:26:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 66129 ']' 00:07:05.184 18:26:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.184 18:26:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:05.184 18:26:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.184 18:26:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:05.184 18:26:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:05.184 [2024-07-15 18:26:27.723706] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:07:05.184 [2024-07-15 18:26:27.723784] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:05.443 [2024-07-15 18:26:27.854874] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:05.444 [2024-07-15 18:26:27.954047] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:05.444 [2024-07-15 18:26:27.954266] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:05.444 [2024-07-15 18:26:27.954359] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:05.444 [2024-07-15 18:26:27.954409] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:05.444 [2024-07-15 18:26:27.954437] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
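The target itself is launched inside that namespace, which is why every NVMF_APP invocation in these logs is prefixed with ip netns exec. A minimal stand-in for nvmfappstart plus waitforlisten could look like the lines below; the rpc_get_methods polling loop is a simplification of the framework's readiness check, not a copy of it:

# Start nvmf_tgt in the target namespace and wait for its RPC socket to answer.
SPDK=/home/vagrant/spdk_repo/spdk
ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

for _ in $(seq 1 100); do
    if "$SPDK/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; then
        break                                   # RPC server is up; configuration can start
    fi
    sleep 0.1
done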
00:07:05.444 [2024-07-15 18:26:27.954640] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:05.444 [2024-07-15 18:26:27.954668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:05.444 [2024-07-15 18:26:27.955663] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:05.444 [2024-07-15 18:26:27.955669] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.380 18:26:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:06.380 18:26:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:07:06.380 18:26:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:06.380 18:26:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:06.380 18:26:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:06.380 18:26:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:06.380 18:26:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:06.380 18:26:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.380 18:26:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:06.380 [2024-07-15 18:26:28.726275] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:06.380 18:26:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.380 18:26:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:07:06.380 18:26:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.380 18:26:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:06.380 [2024-07-15 18:26:28.751662] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:07:06.380 18:26:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.380 18:26:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:07:06.380 18:26:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.380 18:26:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:06.380 18:26:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.380 18:26:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:07:06.380 18:26:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.380 18:26:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:06.380 18:26:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.380 18:26:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:07:06.380 18:26:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.380 18:26:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:06.380 18:26:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.380 18:26:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:06.380 
18:26:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.380 18:26:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:07:06.380 18:26:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:06.380 18:26:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.380 18:26:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:07:06.380 18:26:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:07:06.380 18:26:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:06.380 18:26:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:06.380 18:26:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:06.380 18:26:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.380 18:26:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:06.380 18:26:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:06.380 18:26:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.380 18:26:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:06.380 18:26:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:06.380 18:26:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:07:06.380 18:26:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:06.380 18:26:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:06.380 18:26:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid=ee8aff67-4252-4979-91cf-1a72f40d57b6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:06.380 18:26:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:06.380 18:26:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:06.639 18:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:06.639 18:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:06.639 18:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:07:06.639 18:26:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.639 18:26:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:06.639 18:26:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.639 18:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:07:06.639 18:26:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.639 18:26:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:06.639 18:26:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.639 18:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd 
nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:07:06.639 18:26:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.639 18:26:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:06.639 18:26:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.639 18:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:06.639 18:26:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.639 18:26:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:06.639 18:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:07:06.639 18:26:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.639 18:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:07:06.639 18:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:07:06.639 18:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:06.639 18:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:06.639 18:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid=ee8aff67-4252-4979-91cf-1a72f40d57b6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:06.639 18:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:06.639 18:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:06.639 18:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:06.639 18:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:07:06.639 18:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:07:06.639 18:26:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.639 18:26:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:06.639 18:26:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.639 18:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:06.639 18:26:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.639 18:26:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:06.639 18:26:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.639 18:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:07:06.639 18:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:06.639 18:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:06.639 18:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:06.639 18:26:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.639 18:26:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:06.639 18:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:06.640 18:26:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 
0 ]] 00:07:06.898 18:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:07:06.898 18:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:06.898 18:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:07:06.899 18:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:06.899 18:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:06.899 18:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:06.899 18:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid=ee8aff67-4252-4979-91cf-1a72f40d57b6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:06.899 18:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:06.899 18:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:07:06.899 18:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:06.899 18:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:07:06.899 18:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:07:06.899 18:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:06.899 18:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid=ee8aff67-4252-4979-91cf-1a72f40d57b6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:06.899 18:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:06.899 18:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:07:06.899 18:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:07:06.899 18:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:07:06.899 18:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:06.899 18:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid=ee8aff67-4252-4979-91cf-1a72f40d57b6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:06.899 18:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:07.158 18:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:07.158 18:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:07.158 18:26:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.158 18:26:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:07.158 18:26:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
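Every referral change in this test is verified twice: once against the target's own view over RPC, and once against what an initiator actually reads back from the discovery log page on 10.0.0.2:8009. Condensed, the two checks look roughly like this (the host NQN/ID pair is the one generated for this run; any value from nvme gen-hostnqn would serve):

# Target-side view: referrals currently registered with the discovery service.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_discovery_get_referrals \
    | jq -r '.[].address.traddr'

# Initiator-side view: every discovery-log record except the discovery
# subsystem we are currently connected to.
nvme discover \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 \
    --hostid=ee8aff67-4252-4979-91cf-1a72f40d57b6 \
    -t tcp -a 10.0.0.2 -s 8009 -o json \
    | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'

When a referral is added with -n nqn.2016-06.io.spdk:cnode1, as it just was, the corresponding record carries subtype "nvme subsystem" and that subnqn, which is exactly what the subtype/subnqn jq filters in the surrounding trace are checking.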
00:07:07.158 18:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:07:07.158 18:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:07.158 18:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:07.158 18:26:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.158 18:26:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:07.158 18:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:07.158 18:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:07.158 18:26:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.158 18:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:07:07.158 18:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:07.158 18:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:07:07.158 18:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:07.158 18:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:07.158 18:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:07.158 18:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid=ee8aff67-4252-4979-91cf-1a72f40d57b6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:07.158 18:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:07.158 18:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:07:07.158 18:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:07.158 18:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:07:07.158 18:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:07:07.158 18:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:07.158 18:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid=ee8aff67-4252-4979-91cf-1a72f40d57b6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:07.159 18:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:07.159 18:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:07:07.417 18:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:07:07.417 18:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:07:07.417 18:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:07.417 18:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid=ee8aff67-4252-4979-91cf-1a72f40d57b6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:07.417 18:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:07.417 18:26:29 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:07.417 18:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:07:07.417 18:26:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.417 18:26:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:07.417 18:26:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.418 18:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:07.418 18:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:07:07.418 18:26:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.418 18:26:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:07.418 18:26:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.418 18:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:07:07.418 18:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:07:07.418 18:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:07.418 18:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:07.418 18:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid=ee8aff67-4252-4979-91cf-1a72f40d57b6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:07.418 18:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:07.418 18:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:07.418 18:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:07.418 18:26:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:07:07.418 18:26:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:07:07.418 18:26:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:07:07.418 18:26:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:07.418 18:26:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:07:07.676 18:26:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:07.676 18:26:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:07:07.676 18:26:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:07.676 18:26:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:07.676 rmmod nvme_tcp 00:07:07.676 rmmod nvme_fabrics 00:07:07.676 rmmod nvme_keyring 00:07:07.676 18:26:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:07.676 18:26:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:07:07.676 18:26:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:07:07.676 18:26:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 66129 ']' 00:07:07.676 18:26:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 66129 00:07:07.676 18:26:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 66129 ']' 00:07:07.676 18:26:30 
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 66129 00:07:07.676 18:26:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:07:07.676 18:26:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:07.676 18:26:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66129 00:07:07.676 18:26:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:07.676 18:26:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:07.676 18:26:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66129' 00:07:07.676 killing process with pid 66129 00:07:07.676 18:26:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 66129 00:07:07.676 18:26:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 66129 00:07:07.934 18:26:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:07.934 18:26:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:07.934 18:26:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:07.934 18:26:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:07.934 18:26:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:07.934 18:26:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:07.934 18:26:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:07.934 18:26:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:07.934 18:26:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:07.934 00:07:07.934 real 0m3.496s 00:07:07.934 user 0m10.738s 00:07:07.934 sys 0m1.188s 00:07:07.934 18:26:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:07.934 18:26:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:07.934 ************************************ 00:07:07.934 END TEST nvmf_referrals 00:07:07.934 ************************************ 00:07:08.194 18:26:30 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:08.194 18:26:30 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:08.194 18:26:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:08.194 18:26:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.194 18:26:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:08.194 ************************************ 00:07:08.194 START TEST nvmf_connect_disconnect 00:07:08.194 ************************************ 00:07:08.194 18:26:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:08.194 * Looking for test storage... 
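nvmf_connect_disconnect starts from the same clean slate because each test case finishes with the teardown nvmf_referrals just ran: unload the host-side NVMe modules, kill the target, and flush the test addresses. In rough outline (the kill/flush/module commands mirror the trace above; the explicit namespace delete stands in for the framework's remove_spdk_ns helper and is an assumption):

# Common end-of-test cleanup.
modprobe -v -r nvme-tcp                     # also drags out nvme_fabrics and nvme_keyring
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"          # stop the nvmf_tgt started for this test case
ip -4 addr flush nvmf_init_if               # drop the initiator-side test address
ip netns delete nvmf_tgt_ns_spdk            # assumption: what remove_spdk_ns amounts to here
ip link delete nvmf_br type bridge 2>/dev/null || true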
00:07:08.194 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:08.194 18:26:30 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:08.194 18:26:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:07:08.194 18:26:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:08.194 18:26:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:08.194 18:26:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:08.194 18:26:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:08.194 18:26:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:08.194 18:26:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:08.194 18:26:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:08.194 18:26:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:08.194 18:26:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:08.194 18:26:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:08.194 18:26:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:07:08.194 18:26:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=ee8aff67-4252-4979-91cf-1a72f40d57b6 00:07:08.194 18:26:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:08.194 18:26:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:08.194 18:26:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:08.194 18:26:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:08.194 18:26:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:08.194 18:26:30 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:08.194 18:26:30 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:08.194 18:26:30 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:08.194 18:26:30 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.194 18:26:30 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.194 18:26:30 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.194 18:26:30 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:07:08.194 18:26:30 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.194 18:26:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:07:08.194 18:26:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:08.194 18:26:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:08.194 18:26:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:08.194 18:26:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:08.194 18:26:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:08.194 18:26:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:08.194 18:26:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:08.194 18:26:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:08.194 18:26:30 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:08.194 18:26:30 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:08.194 18:26:30 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:07:08.194 18:26:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:08.194 18:26:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:08.194 18:26:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:08.194 18:26:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:08.194 18:26:30 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:08.194 18:26:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:08.194 18:26:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:08.194 18:26:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:08.194 18:26:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:08.194 18:26:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:08.194 18:26:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:08.194 18:26:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:08.194 18:26:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:08.194 18:26:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:08.194 18:26:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:08.194 18:26:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:08.194 18:26:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:08.194 18:26:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:08.194 18:26:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:08.194 18:26:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:08.194 18:26:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:08.194 18:26:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:08.194 18:26:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:08.194 18:26:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:08.194 18:26:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:08.194 18:26:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:08.194 18:26:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:08.194 18:26:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:08.194 Cannot find device "nvmf_tgt_br" 00:07:08.194 18:26:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # true 00:07:08.194 18:26:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:08.194 Cannot find device "nvmf_tgt_br2" 00:07:08.194 18:26:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # true 00:07:08.194 18:26:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:08.453 18:26:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:08.453 Cannot find device "nvmf_tgt_br" 00:07:08.453 18:26:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # true 00:07:08.453 18:26:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:08.453 Cannot find device 
"nvmf_tgt_br2" 00:07:08.453 18:26:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # true 00:07:08.453 18:26:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:08.453 18:26:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:08.454 18:26:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:08.454 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:08.454 18:26:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # true 00:07:08.454 18:26:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:08.454 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:08.454 18:26:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # true 00:07:08.454 18:26:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:08.454 18:26:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:08.454 18:26:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:08.454 18:26:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:08.454 18:26:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:08.454 18:26:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:08.454 18:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:08.454 18:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:08.454 18:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:08.454 18:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:08.454 18:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:08.454 18:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:08.454 18:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:08.454 18:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:08.714 18:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:08.714 18:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:08.714 18:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:08.714 18:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:08.714 18:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:08.714 18:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:08.714 18:26:31 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:08.714 18:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:08.714 18:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:08.714 18:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:08.714 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:08.714 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.151 ms 00:07:08.714 00:07:08.714 --- 10.0.0.2 ping statistics --- 00:07:08.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:08.714 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:07:08.714 18:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:08.714 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:08.714 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:07:08.714 00:07:08.714 --- 10.0.0.3 ping statistics --- 00:07:08.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:08.714 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:07:08.714 18:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:08.714 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:08.714 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 00:07:08.714 00:07:08.714 --- 10.0.0.1 ping statistics --- 00:07:08.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:08.714 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:07:08.714 18:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:08.714 18:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@433 -- # return 0 00:07:08.714 18:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:08.714 18:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:08.714 18:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:08.714 18:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:08.714 18:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:08.714 18:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:08.714 18:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:08.714 18:26:31 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:07:08.714 18:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:08.714 18:26:31 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:08.714 18:26:31 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:08.714 18:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=66439 00:07:08.714 18:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 66439 00:07:08.714 18:26:31 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 66439 ']' 00:07:08.714 18:26:31 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.714 18:26:31 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:07:08.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.714 18:26:31 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.714 18:26:31 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:08.714 18:26:31 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:08.714 18:26:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:08.714 [2024-07-15 18:26:31.299074] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:07:08.714 [2024-07-15 18:26:31.299146] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:08.973 [2024-07-15 18:26:31.442875] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:08.973 [2024-07-15 18:26:31.544306] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:08.973 [2024-07-15 18:26:31.544348] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:08.973 [2024-07-15 18:26:31.544358] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:08.973 [2024-07-15 18:26:31.544367] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:08.973 [2024-07-15 18:26:31.544375] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
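The nvmf_veth_init trace above builds a small bridged topology: one initiator veth pair left in the root namespace (nvmf_init_if/nvmf_init_br) and two target pairs whose device ends are moved into nvmf_tgt_ns_spdk, all joined over the nvmf_br bridge, with 10.0.0.1/2/3 assigned and TCP port 4420 opened in iptables. A condensed standalone sketch of that plumbing, reusing the interface names and addresses from this log (run as root; an approximation, not the harness's exact helper):

    # namespace plus three veth pairs; the *_br ends stay in the root namespace
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    # addresses: initiator 10.0.0.1, target listeners 10.0.0.2 and 10.0.0.3
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    # bring everything up and bridge the root-namespace ends together
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    # let NVMe/TCP traffic through on the default port, then verify reachability
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3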
00:07:08.973 [2024-07-15 18:26:31.544615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:08.973 [2024-07-15 18:26:31.544715] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:08.974 [2024-07-15 18:26:31.545451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.974 [2024-07-15 18:26:31.545444] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:09.603 18:26:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:09.603 18:26:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:07:09.603 18:26:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:09.603 18:26:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:09.603 18:26:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:09.862 18:26:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:09.862 18:26:32 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:09.862 18:26:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.862 18:26:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:09.862 [2024-07-15 18:26:32.265230] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:09.862 18:26:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.862 18:26:32 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:07:09.862 18:26:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.862 18:26:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:09.862 18:26:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.862 18:26:32 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:07:09.862 18:26:32 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:09.862 18:26:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.862 18:26:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:09.862 18:26:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.862 18:26:32 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:09.862 18:26:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.862 18:26:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:09.862 18:26:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.862 18:26:32 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:09.862 18:26:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.862 18:26:32 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@10 -- # set +x 00:07:09.862 [2024-07-15 18:26:32.350061] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:09.862 18:26:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.862 18:26:32 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:07:09.862 18:26:32 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:07:09.862 18:26:32 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:07:12.405 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:14.308 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:16.829 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:19.361 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:21.283 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:21.283 18:26:43 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:07:21.283 18:26:43 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:07:21.283 18:26:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:21.283 18:26:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:07:21.283 18:26:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:21.283 18:26:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:07:21.283 18:26:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:21.283 18:26:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:21.283 rmmod nvme_tcp 00:07:21.541 rmmod nvme_fabrics 00:07:21.541 rmmod nvme_keyring 00:07:21.541 18:26:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:21.541 18:26:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:07:21.541 18:26:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:07:21.541 18:26:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 66439 ']' 00:07:21.541 18:26:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 66439 00:07:21.541 18:26:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # '[' -z 66439 ']' 00:07:21.541 18:26:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 66439 00:07:21.541 18:26:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:07:21.541 18:26:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:21.541 18:26:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66439 00:07:21.541 18:26:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:21.541 18:26:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:21.541 killing process with pid 66439 00:07:21.541 18:26:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66439' 00:07:21.541 18:26:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 66439 00:07:21.541 18:26:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 66439 00:07:21.799 18:26:44 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:21.799 18:26:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:21.799 18:26:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:21.799 18:26:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:21.799 18:26:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:21.799 18:26:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:21.799 18:26:44 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:21.799 18:26:44 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:21.799 18:26:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:21.799 00:07:21.799 real 0m13.724s 00:07:21.799 user 0m49.035s 00:07:21.799 sys 0m2.707s 00:07:21.799 18:26:44 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:21.799 18:26:44 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:21.799 ************************************ 00:07:21.799 END TEST nvmf_connect_disconnect 00:07:21.799 ************************************ 00:07:21.799 18:26:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:21.799 18:26:44 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:07:21.799 18:26:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:21.799 18:26:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.799 18:26:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:21.799 ************************************ 00:07:21.799 START TEST nvmf_multitarget 00:07:21.799 ************************************ 00:07:21.799 18:26:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:07:22.058 * Looking for test storage... 
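The nvmf_connect_disconnect run that just ended provisions the target entirely through the RPC socket (the rpc_cmd calls traced above: TCP transport, 64 MiB/512 B malloc bdev, subsystem cnode1, namespace, listener on 10.0.0.2:4420) and then loops num_iterations=5 times, each pass attaching and detaching the kernel initiator, which is what prints the "disconnected 1 controller(s)" lines. A condensed sketch of that flow; the rpc.py and nvme-cli invocations are standard usage rather than verbatim harness code, and the harness additionally passes the --hostnqn/--hostid options defined in nvmf/common.sh:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py           # talks to the nvmf_tgt started above
    $RPC nvmf_create_transport -t tcp -o -u 8192 -c 0         # as in connect_disconnect.sh@18
    $RPC bdev_malloc_create 64 512                            # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512 -> Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    for i in $(seq 1 5); do                                   # num_iterations=5 in the trace
        nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1         # "... disconnected 1 controller(s)"
    done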
00:07:22.058 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:22.058 18:26:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:22.058 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:07:22.058 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:22.058 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:22.058 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:22.058 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:22.058 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:22.058 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:22.058 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:22.058 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:22.058 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:22.058 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:22.058 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:07:22.058 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=ee8aff67-4252-4979-91cf-1a72f40d57b6 00:07:22.058 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:22.058 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:22.058 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:22.058 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:22.058 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:22.058 18:26:44 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:22.058 18:26:44 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:22.058 18:26:44 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:22.058 18:26:44 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.058 18:26:44 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.058 18:26:44 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.058 18:26:44 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:07:22.058 18:26:44 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.058 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:07:22.058 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:22.058 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:22.058 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:22.058 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:22.058 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:22.058 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:22.058 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:22.058 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:22.058 18:26:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:07:22.058 18:26:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:07:22.058 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:22.058 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:22.058 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:22.058 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:22.058 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:22.058 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:22.058 18:26:44 
nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:22.058 18:26:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:22.058 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:22.058 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:22.058 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:22.058 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:22.058 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:22.059 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:22.059 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:22.059 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:22.059 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:22.059 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:22.059 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:22.059 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:22.059 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:22.059 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:22.059 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:22.059 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:22.059 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:22.059 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:22.059 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:22.059 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:22.059 Cannot find device "nvmf_tgt_br" 00:07:22.059 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@155 -- # true 00:07:22.059 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:22.059 Cannot find device "nvmf_tgt_br2" 00:07:22.059 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@156 -- # true 00:07:22.059 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:22.059 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:22.059 Cannot find device "nvmf_tgt_br" 00:07:22.059 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@158 -- # true 00:07:22.059 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:22.059 Cannot find device "nvmf_tgt_br2" 00:07:22.059 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@159 -- # true 00:07:22.059 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:22.317 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:22.317 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link 
delete nvmf_tgt_if 00:07:22.317 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:22.317 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@162 -- # true 00:07:22.317 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:22.317 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:22.317 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@163 -- # true 00:07:22.317 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:22.317 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:22.317 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:22.317 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:22.317 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:22.317 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:22.317 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:22.317 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:22.317 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:22.317 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:22.317 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:22.317 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:22.317 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:22.317 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:22.317 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:22.317 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:22.317 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:22.317 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:22.317 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:22.317 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:22.317 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:22.575 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:22.575 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:22.576 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:22.576 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:22.576 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:07:22.576 00:07:22.576 --- 10.0.0.2 ping statistics --- 00:07:22.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:22.576 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:07:22.576 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:22.576 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:22.576 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.115 ms 00:07:22.576 00:07:22.576 --- 10.0.0.3 ping statistics --- 00:07:22.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:22.576 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:07:22.576 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:22.576 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:22.576 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms 00:07:22.576 00:07:22.576 --- 10.0.0.1 ping statistics --- 00:07:22.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:22.576 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:07:22.576 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:22.576 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@433 -- # return 0 00:07:22.576 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:22.576 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:22.576 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:22.576 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:22.576 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:22.576 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:22.576 18:26:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:22.576 18:26:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:07:22.576 18:26:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:22.576 18:26:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:22.576 18:26:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:22.576 18:26:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=66843 00:07:22.576 18:26:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:22.576 18:26:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 66843 00:07:22.576 18:26:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 66843 ']' 00:07:22.576 18:26:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.576 18:26:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:22.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:22.576 18:26:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
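nvmfappstart backgrounds the target inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF, pid 66843 here) and then blocks in waitforlisten, with max_retries=100, until the RPC socket answers. A rough stand-in for that wait, polling the default /var/tmp/spdk.sock with SPDK's rpc.py (an approximation of the helper, not its exact code):

    for i in $(seq 1 100); do                                 # max_retries=100, as in the trace
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock -t 1 \
                rpc_get_methods >/dev/null 2>&1; then
            break                                             # target is up and serving RPCs
        fi
        sleep 0.5
    done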
00:07:22.576 18:26:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:22.576 18:26:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:22.576 [2024-07-15 18:26:45.077321] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:07:22.576 [2024-07-15 18:26:45.077399] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:22.860 [2024-07-15 18:26:45.220614] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:22.860 [2024-07-15 18:26:45.315255] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:22.860 [2024-07-15 18:26:45.315304] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:22.860 [2024-07-15 18:26:45.315313] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:22.860 [2024-07-15 18:26:45.315322] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:22.860 [2024-07-15 18:26:45.315329] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:22.860 [2024-07-15 18:26:45.316180] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:22.860 [2024-07-15 18:26:45.316280] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:22.860 [2024-07-15 18:26:45.316333] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:22.860 [2024-07-15 18:26:45.316336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.426 18:26:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:23.426 18:26:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:07:23.426 18:26:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:23.426 18:26:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:23.426 18:26:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:23.426 18:26:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:23.426 18:26:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:07:23.426 18:26:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:07:23.426 18:26:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:23.684 18:26:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:07:23.684 18:26:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:07:23.684 "nvmf_tgt_1" 00:07:23.684 18:26:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:07:23.942 "nvmf_tgt_2" 00:07:23.942 18:26:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
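The multitarget test exercises extra target objects purely through multitarget_rpc.py: it first confirms a single default target, creates nvmf_tgt_1 and nvmf_tgt_2 (the quoted names above are the RPC replies), and the trace that follows re-counts with jq expecting 3, then deletes both and expects 1 again. Condensed, that round-trip looks like this (same script path and arguments as the trace; -s 32 is copied from the harness invocation):

    MT=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py
    $MT nvmf_get_targets | jq length             # 1: only the default target
    $MT nvmf_create_target -n nvmf_tgt_1 -s 32   # -s 32 as passed by multitarget.sh
    $MT nvmf_create_target -n nvmf_tgt_2 -s 32
    $MT nvmf_get_targets | jq length             # now 3
    $MT nvmf_delete_target -n nvmf_tgt_1
    $MT nvmf_delete_target -n nvmf_tgt_2
    $MT nvmf_get_targets | jq length             # back to 1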
00:07:23.942 18:26:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:07:23.942 18:26:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:07:23.942 18:26:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:07:23.942 true 00:07:24.199 18:26:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:07:24.199 true 00:07:24.199 18:26:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:07:24.199 18:26:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:24.199 18:26:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:07:24.199 18:26:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:24.199 18:26:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:07:24.199 18:26:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:24.199 18:26:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:07:24.457 18:26:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:24.457 18:26:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:07:24.457 18:26:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:24.457 18:26:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:24.457 rmmod nvme_tcp 00:07:24.457 rmmod nvme_fabrics 00:07:24.457 rmmod nvme_keyring 00:07:24.457 18:26:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:24.457 18:26:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:07:24.457 18:26:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:07:24.457 18:26:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 66843 ']' 00:07:24.457 18:26:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 66843 00:07:24.457 18:26:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 66843 ']' 00:07:24.457 18:26:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 66843 00:07:24.457 18:26:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:07:24.457 18:26:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:24.457 18:26:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66843 00:07:24.457 18:26:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:24.457 killing process with pid 66843 00:07:24.457 18:26:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:24.457 18:26:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66843' 00:07:24.457 18:26:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 66843 00:07:24.457 18:26:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 66843 00:07:24.715 18:26:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:24.715 18:26:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:24.715 18:26:47 nvmf_tcp.nvmf_multitarget 
-- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:24.715 18:26:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:24.715 18:26:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:24.715 18:26:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:24.715 18:26:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:24.715 18:26:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:24.715 18:26:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:24.715 00:07:24.715 real 0m2.809s 00:07:24.715 user 0m8.159s 00:07:24.715 sys 0m0.866s 00:07:24.715 18:26:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:24.715 18:26:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:24.715 ************************************ 00:07:24.715 END TEST nvmf_multitarget 00:07:24.715 ************************************ 00:07:24.715 18:26:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:24.715 18:26:47 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:07:24.715 18:26:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:24.715 18:26:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.715 18:26:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:24.715 ************************************ 00:07:24.715 START TEST nvmf_rpc 00:07:24.715 ************************************ 00:07:24.715 18:26:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:07:24.972 * Looking for test storage... 
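rpc.sh, which starts here, opens by fetching nvmf_get_stats over rpc_cmd and sanity-checking the idle state: the JSON captured further down shows four empty poll groups (nvmf_tgt_poll_group_000..003), one per core in the -m 0xF mask, with zero qpairs. A hedged equivalent of that first check using SPDK's rpc.py and jq (not the jcount helper the script itself uses):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_stats \
        | jq '[.poll_groups[].name] | length'     # expect 4: one poll group per core in -m 0xF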
00:07:24.972 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:24.972 18:26:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:24.972 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:07:24.972 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:24.972 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:24.972 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:24.972 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:24.972 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:24.972 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:24.972 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:24.972 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:24.972 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:24.972 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:24.972 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:07:24.972 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=ee8aff67-4252-4979-91cf-1a72f40d57b6 00:07:24.972 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:24.972 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:24.972 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:24.972 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:24.972 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:24.972 18:26:47 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:24.972 18:26:47 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:24.972 18:26:47 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:24.972 18:26:47 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.972 18:26:47 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.972 18:26:47 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.972 18:26:47 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:07:24.972 18:26:47 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.973 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:07:24.973 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:24.973 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:24.973 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:24.973 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:24.973 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:24.973 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:24.973 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:24.973 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:24.973 18:26:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:07:24.973 18:26:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:07:24.973 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:24.973 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:24.973 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:24.973 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:24.973 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:24.973 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:24.973 18:26:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:24.973 18:26:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:24.973 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:24.973 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:24.973 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:24.973 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:24.973 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:24.973 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:24.973 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:24.973 18:26:47 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:24.973 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:24.973 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:24.973 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:24.973 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:24.973 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:24.973 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:24.973 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:24.973 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:24.973 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:24.973 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:24.973 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:24.973 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:24.973 Cannot find device "nvmf_tgt_br" 00:07:24.973 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@155 -- # true 00:07:24.973 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:24.973 Cannot find device "nvmf_tgt_br2" 00:07:24.973 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@156 -- # true 00:07:24.973 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:24.973 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:24.973 Cannot find device "nvmf_tgt_br" 00:07:24.973 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@158 -- # true 00:07:24.973 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:24.973 Cannot find device "nvmf_tgt_br2" 00:07:24.973 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@159 -- # true 00:07:24.973 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:24.973 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:24.973 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:24.973 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:24.973 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@162 -- # true 00:07:24.973 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:24.973 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:24.973 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@163 -- # true 00:07:24.973 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:25.230 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:25.230 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:25.230 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:25.230 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:25.230 18:26:47 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:25.230 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:25.230 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:25.230 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:25.230 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:25.230 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:25.230 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:25.230 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:25.230 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:25.230 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:25.230 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:25.230 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:25.230 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:25.230 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:25.230 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:25.230 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:25.230 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:25.230 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:25.531 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:25.531 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:25.531 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:07:25.531 00:07:25.531 --- 10.0.0.2 ping statistics --- 00:07:25.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:25.531 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:07:25.531 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:25.531 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:25.531 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.088 ms 00:07:25.531 00:07:25.531 --- 10.0.0.3 ping statistics --- 00:07:25.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:25.531 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:07:25.531 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:25.531 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:25.531 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:07:25.531 00:07:25.531 --- 10.0.0.1 ping statistics --- 00:07:25.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:25.531 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:07:25.531 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:25.531 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@433 -- # return 0 00:07:25.531 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:25.531 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:25.531 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:25.531 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:25.531 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:25.531 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:25.531 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:25.531 18:26:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:07:25.531 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:25.531 18:26:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:25.531 18:26:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:25.531 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=67075 00:07:25.531 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:25.531 18:26:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 67075 00:07:25.531 18:26:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 67075 ']' 00:07:25.531 18:26:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.531 18:26:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:25.531 18:26:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:25.531 18:26:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:25.531 18:26:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:25.531 [2024-07-15 18:26:47.960708] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:07:25.531 [2024-07-15 18:26:47.960791] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:25.531 [2024-07-15 18:26:48.105167] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:25.807 [2024-07-15 18:26:48.202675] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:25.807 [2024-07-15 18:26:48.202744] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:25.807 [2024-07-15 18:26:48.202754] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:25.807 [2024-07-15 18:26:48.202762] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:25.807 [2024-07-15 18:26:48.202769] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:25.807 [2024-07-15 18:26:48.202976] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:25.807 [2024-07-15 18:26:48.203167] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:25.807 [2024-07-15 18:26:48.204022] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:25.807 [2024-07-15 18:26:48.204023] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.375 18:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:26.375 18:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:26.375 18:26:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:26.375 18:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:26.375 18:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.375 18:26:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:26.375 18:26:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:07:26.375 18:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.375 18:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.375 18:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.375 18:26:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:07:26.375 "poll_groups": [ 00:07:26.375 { 00:07:26.375 "admin_qpairs": 0, 00:07:26.375 "completed_nvme_io": 0, 00:07:26.375 "current_admin_qpairs": 0, 00:07:26.375 "current_io_qpairs": 0, 00:07:26.375 "io_qpairs": 0, 00:07:26.375 "name": "nvmf_tgt_poll_group_000", 00:07:26.375 "pending_bdev_io": 0, 00:07:26.375 "transports": [] 00:07:26.375 }, 00:07:26.375 { 00:07:26.375 "admin_qpairs": 0, 00:07:26.375 "completed_nvme_io": 0, 00:07:26.375 "current_admin_qpairs": 0, 00:07:26.375 "current_io_qpairs": 0, 00:07:26.375 "io_qpairs": 0, 00:07:26.375 "name": "nvmf_tgt_poll_group_001", 00:07:26.375 "pending_bdev_io": 0, 00:07:26.375 "transports": [] 00:07:26.375 }, 00:07:26.375 { 00:07:26.375 "admin_qpairs": 0, 00:07:26.375 "completed_nvme_io": 0, 00:07:26.375 "current_admin_qpairs": 0, 00:07:26.375 "current_io_qpairs": 0, 00:07:26.375 "io_qpairs": 0, 00:07:26.375 "name": "nvmf_tgt_poll_group_002", 00:07:26.375 "pending_bdev_io": 0, 00:07:26.375 "transports": [] 00:07:26.375 }, 00:07:26.375 { 00:07:26.375 "admin_qpairs": 0, 00:07:26.375 "completed_nvme_io": 0, 00:07:26.375 "current_admin_qpairs": 0, 00:07:26.375 "current_io_qpairs": 0, 00:07:26.375 "io_qpairs": 0, 00:07:26.375 "name": "nvmf_tgt_poll_group_003", 00:07:26.375 "pending_bdev_io": 0, 00:07:26.375 "transports": [] 00:07:26.375 } 00:07:26.375 ], 00:07:26.375 "tick_rate": 2490000000 00:07:26.375 }' 00:07:26.375 18:26:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:07:26.375 18:26:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:07:26.375 18:26:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:07:26.375 18:26:48 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:07:26.634 18:26:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:07:26.634 18:26:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:07:26.634 18:26:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:07:26.634 18:26:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:26.634 18:26:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.634 18:26:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.634 [2024-07-15 18:26:49.060196] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:26.634 18:26:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.634 18:26:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:07:26.634 18:26:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.634 18:26:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.634 18:26:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.634 18:26:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:07:26.634 "poll_groups": [ 00:07:26.634 { 00:07:26.634 "admin_qpairs": 0, 00:07:26.634 "completed_nvme_io": 0, 00:07:26.634 "current_admin_qpairs": 0, 00:07:26.634 "current_io_qpairs": 0, 00:07:26.634 "io_qpairs": 0, 00:07:26.634 "name": "nvmf_tgt_poll_group_000", 00:07:26.634 "pending_bdev_io": 0, 00:07:26.634 "transports": [ 00:07:26.634 { 00:07:26.634 "trtype": "TCP" 00:07:26.634 } 00:07:26.634 ] 00:07:26.634 }, 00:07:26.634 { 00:07:26.634 "admin_qpairs": 0, 00:07:26.634 "completed_nvme_io": 0, 00:07:26.634 "current_admin_qpairs": 0, 00:07:26.634 "current_io_qpairs": 0, 00:07:26.634 "io_qpairs": 0, 00:07:26.634 "name": "nvmf_tgt_poll_group_001", 00:07:26.634 "pending_bdev_io": 0, 00:07:26.634 "transports": [ 00:07:26.634 { 00:07:26.634 "trtype": "TCP" 00:07:26.634 } 00:07:26.634 ] 00:07:26.634 }, 00:07:26.634 { 00:07:26.634 "admin_qpairs": 0, 00:07:26.634 "completed_nvme_io": 0, 00:07:26.634 "current_admin_qpairs": 0, 00:07:26.634 "current_io_qpairs": 0, 00:07:26.634 "io_qpairs": 0, 00:07:26.634 "name": "nvmf_tgt_poll_group_002", 00:07:26.634 "pending_bdev_io": 0, 00:07:26.634 "transports": [ 00:07:26.634 { 00:07:26.634 "trtype": "TCP" 00:07:26.634 } 00:07:26.634 ] 00:07:26.634 }, 00:07:26.634 { 00:07:26.634 "admin_qpairs": 0, 00:07:26.634 "completed_nvme_io": 0, 00:07:26.634 "current_admin_qpairs": 0, 00:07:26.634 "current_io_qpairs": 0, 00:07:26.634 "io_qpairs": 0, 00:07:26.634 "name": "nvmf_tgt_poll_group_003", 00:07:26.634 "pending_bdev_io": 0, 00:07:26.634 "transports": [ 00:07:26.634 { 00:07:26.634 "trtype": "TCP" 00:07:26.634 } 00:07:26.634 ] 00:07:26.634 } 00:07:26.634 ], 00:07:26.634 "tick_rate": 2490000000 00:07:26.634 }' 00:07:26.634 18:26:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:07:26.634 18:26:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:07:26.634 18:26:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:07:26.634 18:26:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:26.634 18:26:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:07:26.634 18:26:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:07:26.634 18:26:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
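For reference, the jcount and jsum calls traced above and below are small helpers from target/rpc.sh that count or sum fields of the nvmf_get_stats JSON captured in $stats. A minimal sketch of the same idea (the exact redirection inside the real helpers may differ slightly):

# assumes "$stats" holds the JSON printed by 'rpc_cmd nvmf_get_stats' above
jcount() {                          # how many values does the jq filter yield?
  local filter=$1
  jq "$filter" <<< "$stats" | wc -l
}
jsum() {                            # arithmetic sum of the values the filter yields
  local filter=$1
  jq "$filter" <<< "$stats" | awk '{s += $1} END {print s}'
}
jcount '.poll_groups[].name'        # 4 poll groups for the -m 0xF core mask
jsum '.poll_groups[].io_qpairs'     # 0 until a host connects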
00:07:26.634 18:26:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:26.634 18:26:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:07:26.634 18:26:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:07:26.634 18:26:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:07:26.634 18:26:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:07:26.634 18:26:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:07:26.634 18:26:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:07:26.634 18:26:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.634 18:26:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.634 Malloc1 00:07:26.634 18:26:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.634 18:26:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:26.634 18:26:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.634 18:26:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.893 18:26:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.893 18:26:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:26.893 18:26:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.893 18:26:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.893 18:26:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.893 18:26:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:07:26.893 18:26:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.893 18:26:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.893 18:26:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.893 18:26:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:26.893 18:26:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.893 18:26:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.893 [2024-07-15 18:26:49.280724] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:26.893 18:26:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.893 18:26:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid=ee8aff67-4252-4979-91cf-1a72f40d57b6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -a 10.0.0.2 -s 4420 00:07:26.893 18:26:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:26.893 18:26:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid=ee8aff67-4252-4979-91cf-1a72f40d57b6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -a 10.0.0.2 -s 4420 00:07:26.893 18:26:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:07:26.893 18:26:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:26.893 18:26:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:07:26.893 18:26:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:26.893 18:26:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:07:26.893 18:26:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:26.893 18:26:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:07:26.893 18:26:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:07:26.893 18:26:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid=ee8aff67-4252-4979-91cf-1a72f40d57b6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -a 10.0.0.2 -s 4420 00:07:26.893 [2024-07-15 18:26:49.306965] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6' 00:07:26.893 Failed to write to /dev/nvme-fabrics: Input/output error 00:07:26.893 could not add new controller: failed to write to nvme-fabrics device 00:07:26.893 18:26:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:26.893 18:26:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:26.893 18:26:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:26.893 18:26:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:26.893 18:26:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:07:26.893 18:26:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.893 18:26:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.893 18:26:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.893 18:26:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid=ee8aff67-4252-4979-91cf-1a72f40d57b6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:26.893 18:26:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:07:26.893 18:26:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:26.893 18:26:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:26.893 18:26:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:26.893 18:26:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:29.449 18:26:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:29.449 18:26:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:29.449 18:26:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:29.449 18:26:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:29.449 18:26:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:29.449 18:26:51 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:29.449 18:26:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:29.449 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:29.449 18:26:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:29.449 18:26:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:29.449 18:26:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:29.449 18:26:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:29.449 18:26:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:29.449 18:26:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:29.449 18:26:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:29.449 18:26:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:07:29.450 18:26:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.450 18:26:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.450 18:26:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.450 18:26:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid=ee8aff67-4252-4979-91cf-1a72f40d57b6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:29.450 18:26:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:29.450 18:26:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid=ee8aff67-4252-4979-91cf-1a72f40d57b6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:29.450 18:26:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:07:29.450 18:26:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:29.450 18:26:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:07:29.450 18:26:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:29.450 18:26:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:07:29.450 18:26:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:29.450 18:26:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:07:29.450 18:26:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:07:29.450 18:26:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid=ee8aff67-4252-4979-91cf-1a72f40d57b6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:29.450 [2024-07-15 18:26:51.664226] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6' 00:07:29.450 Failed to write to /dev/nvme-fabrics: Input/output error 00:07:29.450 could not add new controller: failed to write to nvme-fabrics device 00:07:29.450 18:26:51 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:07:29.450 18:26:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:29.450 18:26:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:29.450 18:26:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:29.450 18:26:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:07:29.450 18:26:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.450 18:26:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.450 18:26:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.450 18:26:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid=ee8aff67-4252-4979-91cf-1a72f40d57b6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:29.450 18:26:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:07:29.450 18:26:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:29.450 18:26:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:29.450 18:26:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:29.450 18:26:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:31.350 18:26:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:31.350 18:26:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:31.350 18:26:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:31.350 18:26:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:31.350 18:26:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:31.350 18:26:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:31.350 18:26:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:31.350 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:31.350 18:26:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:31.350 18:26:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:31.350 18:26:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:31.350 18:26:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:31.350 18:26:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:31.350 18:26:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:31.350 18:26:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:31.350 18:26:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:31.350 18:26:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.350 18:26:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.658 18:26:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.658 18:26:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:07:31.658 18:26:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:31.658 18:26:53 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:31.658 18:26:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.658 18:26:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.658 18:26:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.658 18:26:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:31.658 18:26:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.658 18:26:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.658 [2024-07-15 18:26:53.984217] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:31.658 18:26:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.658 18:26:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:31.658 18:26:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.658 18:26:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.658 18:26:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.658 18:26:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:31.658 18:26:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.658 18:26:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.658 18:26:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.658 18:26:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid=ee8aff67-4252-4979-91cf-1a72f40d57b6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:31.658 18:26:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:31.658 18:26:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:31.658 18:26:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:31.658 18:26:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:31.658 18:26:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:34.185 18:26:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:34.185 18:26:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:34.185 18:26:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:34.185 18:26:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:34.185 18:26:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:34.185 18:26:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:34.185 18:26:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:34.185 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:34.185 18:26:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:34.185 18:26:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:34.185 18:26:56 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:34.185 18:26:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:34.185 18:26:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:34.185 18:26:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:34.185 18:26:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:34.185 18:26:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:34.185 18:26:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.185 18:26:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.185 18:26:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.185 18:26:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:34.185 18:26:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.185 18:26:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.185 18:26:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.185 18:26:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:34.185 18:26:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:34.185 18:26:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.185 18:26:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.185 18:26:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.185 18:26:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:34.185 18:26:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.185 18:26:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.185 [2024-07-15 18:26:56.327446] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:34.185 18:26:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.185 18:26:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:34.185 18:26:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.185 18:26:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.185 18:26:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.185 18:26:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:34.185 18:26:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.185 18:26:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.185 18:26:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.185 18:26:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid=ee8aff67-4252-4979-91cf-1a72f40d57b6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:34.185 18:26:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:34.185 18:26:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:07:34.185 18:26:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:34.185 18:26:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:34.185 18:26:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:36.087 18:26:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:36.087 18:26:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:36.087 18:26:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:36.087 18:26:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:36.087 18:26:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:36.087 18:26:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:36.087 18:26:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:36.087 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:36.087 18:26:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:36.087 18:26:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:36.087 18:26:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:36.346 18:26:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:36.346 18:26:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:36.346 18:26:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:36.346 18:26:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:36.346 18:26:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:36.346 18:26:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.346 18:26:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.346 18:26:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.346 18:26:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:36.346 18:26:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.346 18:26:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.346 18:26:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.346 18:26:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:36.346 18:26:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:36.346 18:26:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.346 18:26:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.346 18:26:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.346 18:26:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:36.346 18:26:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.346 18:26:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.346 [2024-07-15 18:26:58.762882] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:07:36.346 18:26:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.346 18:26:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:36.346 18:26:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.346 18:26:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.346 18:26:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.346 18:26:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:36.346 18:26:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.346 18:26:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.346 18:26:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.346 18:26:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid=ee8aff67-4252-4979-91cf-1a72f40d57b6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:36.605 18:26:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:36.605 18:26:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:36.605 18:26:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:36.605 18:26:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:36.605 18:26:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:38.510 18:27:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:38.510 18:27:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:38.510 18:27:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:38.510 18:27:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:38.510 18:27:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:38.510 18:27:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:38.510 18:27:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:38.510 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:38.510 18:27:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:38.510 18:27:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:38.510 18:27:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:38.510 18:27:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:38.768 18:27:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:38.768 18:27:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:38.768 18:27:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:38.768 18:27:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:38.768 18:27:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.768 18:27:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.768 18:27:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:07:38.768 18:27:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:38.768 18:27:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.768 18:27:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.769 18:27:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.769 18:27:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:38.769 18:27:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:38.769 18:27:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.769 18:27:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.769 18:27:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.769 18:27:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:38.769 18:27:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.769 18:27:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.769 [2024-07-15 18:27:01.190811] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:38.769 18:27:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.769 18:27:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:38.769 18:27:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.769 18:27:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.769 18:27:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.769 18:27:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:38.769 18:27:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.769 18:27:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.769 18:27:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.769 18:27:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid=ee8aff67-4252-4979-91cf-1a72f40d57b6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:39.028 18:27:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:39.028 18:27:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:39.028 18:27:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:39.028 18:27:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:39.028 18:27:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:40.959 18:27:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:40.959 18:27:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:40.959 18:27:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:40.959 18:27:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:40.959 18:27:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:40.959 
18:27:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:40.959 18:27:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:41.229 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:41.229 18:27:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:41.229 18:27:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:41.229 18:27:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:41.229 18:27:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:41.229 18:27:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:41.229 18:27:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:41.229 18:27:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:41.229 18:27:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:41.229 18:27:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.229 18:27:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:41.229 18:27:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.229 18:27:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:41.229 18:27:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.229 18:27:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:41.229 18:27:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.229 18:27:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:41.229 18:27:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:41.229 18:27:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.229 18:27:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:41.229 18:27:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.229 18:27:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:41.229 18:27:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.229 18:27:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:41.229 [2024-07-15 18:27:03.618751] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:41.229 18:27:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.229 18:27:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:41.229 18:27:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.229 18:27:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:41.229 18:27:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.229 18:27:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:41.229 18:27:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.229 18:27:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:41.229 18:27:03 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.229 18:27:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid=ee8aff67-4252-4979-91cf-1a72f40d57b6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:41.229 18:27:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:41.229 18:27:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:41.229 18:27:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:41.229 18:27:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:41.229 18:27:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:43.765 18:27:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:43.765 18:27:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:43.765 18:27:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:43.765 18:27:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:43.765 18:27:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:43.765 18:27:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:43.765 18:27:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:43.765 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:43.765 18:27:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:43.765 18:27:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:43.765 18:27:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:43.765 18:27:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:43.765 18:27:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:43.765 18:27:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:43.765 18:27:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:43.765 18:27:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:43.765 18:27:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.765 18:27:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:43.765 18:27:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.765 18:27:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:43.765 18:27:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.765 18:27:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:43.765 18:27:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.765 18:27:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:07:43.765 18:27:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:43.765 18:27:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:43.765 18:27:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.765 18:27:05 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:07:43.765 18:27:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.765 18:27:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:43.765 18:27:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.765 18:27:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:43.765 [2024-07-15 18:27:05.974014] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:43.765 18:27:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.765 18:27:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:43.765 18:27:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.765 18:27:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:43.765 18:27:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.765 18:27:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:43.765 18:27:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.765 18:27:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:43.765 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.765 18:27:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:43.765 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.765 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:43.765 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.765 18:27:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:43.765 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.765 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:43.765 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.765 18:27:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:43.765 18:27:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:43.765 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.765 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:43.765 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.765 18:27:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:43.765 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.765 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:43.765 [2024-07-15 18:27:06.045964] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:43.765 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.765 18:27:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:43.765 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:07:43.765 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:43.765 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.765 18:27:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:43.765 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.765 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:43.765 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.765 18:27:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:43.765 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.765 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:43.765 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.765 18:27:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:43.765 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.765 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:43.765 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.765 18:27:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:43.765 18:27:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:43.765 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.765 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:43.765 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.765 18:27:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:43.765 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.765 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:43.765 [2024-07-15 18:27:06.113917] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:43.765 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.765 18:27:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:43.765 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.765 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:43.765 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.765 18:27:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:43.765 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.765 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:43.765 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.765 18:27:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:43.765 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.765 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
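In the loop traced here, rpc_cmd is the autotest helper that forwards each call to the target's JSON-RPC interface (scripts/rpc.py). One iteration of the create/tear-down cycle, written directly against rpc.py, is sketched below; the repo path is the one this job appears to use and the default RPC socket is assumed, so adjust both for your checkout:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py              # same repo as the nvmf_tgt binary above; assumption for this sketch
nqn=nqn.2016-06.io.spdk:cnode1

$rpc nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME    # subsystem with a fixed serial number
$rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_ns "$nqn" Malloc1                    # Malloc1 was created earlier via bdev_malloc_create 64 512 -b Malloc1
$rpc nvmf_subsystem_allow_any_host "$nqn"                    # no per-host ACL in this phase
$rpc nvmf_subsystem_remove_ns "$nqn" 1                       # remove namespace id 1 again
$rpc nvmf_delete_subsystem "$nqn"                            # and delete the subsystem before the next iteration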
00:07:43.765 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.765 18:27:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:43.765 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.765 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:43.765 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.765 18:27:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:43.765 18:27:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:43.765 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.765 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:43.765 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.765 18:27:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:43.765 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.765 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:43.765 [2024-07-15 18:27:06.185861] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:43.765 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.765 18:27:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:43.765 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.765 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:43.765 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.765 18:27:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:43.765 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.765 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:43.765 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.765 18:27:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:43.765 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.765 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:43.765 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.765 18:27:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:43.765 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.765 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:43.765 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.765 18:27:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:43.765 18:27:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:43.765 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.766 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
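For comparison, the waitforserial / waitforserial_disconnect cycles exercised earlier in this test (the loop above never attaches a host) boil down to the host-side pattern below; the subsystem NQN, host UUID and serial are simply the values this particular run uses:

subnqn=nqn.2016-06.io.spdk:cnode1
hostnqn=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6

nvme connect -t tcp -n "$subnqn" -a 10.0.0.2 -s 4420 --hostnqn="$hostnqn" --hostid=ee8aff67-4252-4979-91cf-1a72f40d57b6
sleep 2                                                      # the helper actually polls; a fixed sleep keeps the sketch short
lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME       # 1 => the namespace is visible by its serial
nvme disconnect -n "$subnqn"                                 # prints "... disconnected 1 controller(s)" as seen above
! lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME  # and the device is gone again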
00:07:43.766 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.766 18:27:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:43.766 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.766 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:43.766 [2024-07-15 18:27:06.245814] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:43.766 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.766 18:27:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:43.766 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.766 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:43.766 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.766 18:27:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:43.766 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.766 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:43.766 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.766 18:27:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:43.766 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.766 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:43.766 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.766 18:27:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:43.766 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.766 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:43.766 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.766 18:27:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:07:43.766 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.766 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:43.766 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.766 18:27:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:07:43.766 "poll_groups": [ 00:07:43.766 { 00:07:43.766 "admin_qpairs": 2, 00:07:43.766 "completed_nvme_io": 67, 00:07:43.766 "current_admin_qpairs": 0, 00:07:43.766 "current_io_qpairs": 0, 00:07:43.766 "io_qpairs": 16, 00:07:43.766 "name": "nvmf_tgt_poll_group_000", 00:07:43.766 "pending_bdev_io": 0, 00:07:43.766 "transports": [ 00:07:43.766 { 00:07:43.766 "trtype": "TCP" 00:07:43.766 } 00:07:43.766 ] 00:07:43.766 }, 00:07:43.766 { 00:07:43.766 "admin_qpairs": 3, 00:07:43.766 "completed_nvme_io": 117, 00:07:43.766 "current_admin_qpairs": 0, 00:07:43.766 "current_io_qpairs": 0, 00:07:43.766 "io_qpairs": 17, 00:07:43.766 "name": "nvmf_tgt_poll_group_001", 00:07:43.766 "pending_bdev_io": 0, 00:07:43.766 "transports": [ 00:07:43.766 { 00:07:43.766 "trtype": "TCP" 00:07:43.766 } 00:07:43.766 ] 00:07:43.766 }, 00:07:43.766 { 00:07:43.766 "admin_qpairs": 1, 00:07:43.766 
"completed_nvme_io": 84, 00:07:43.766 "current_admin_qpairs": 0, 00:07:43.766 "current_io_qpairs": 0, 00:07:43.766 "io_qpairs": 19, 00:07:43.766 "name": "nvmf_tgt_poll_group_002", 00:07:43.766 "pending_bdev_io": 0, 00:07:43.766 "transports": [ 00:07:43.766 { 00:07:43.766 "trtype": "TCP" 00:07:43.766 } 00:07:43.766 ] 00:07:43.766 }, 00:07:43.766 { 00:07:43.766 "admin_qpairs": 1, 00:07:43.766 "completed_nvme_io": 152, 00:07:43.766 "current_admin_qpairs": 0, 00:07:43.766 "current_io_qpairs": 0, 00:07:43.766 "io_qpairs": 18, 00:07:43.766 "name": "nvmf_tgt_poll_group_003", 00:07:43.766 "pending_bdev_io": 0, 00:07:43.766 "transports": [ 00:07:43.766 { 00:07:43.766 "trtype": "TCP" 00:07:43.766 } 00:07:43.766 ] 00:07:43.766 } 00:07:43.766 ], 00:07:43.766 "tick_rate": 2490000000 00:07:43.766 }' 00:07:43.766 18:27:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:07:43.766 18:27:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:07:43.766 18:27:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:43.766 18:27:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:07:43.766 18:27:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:07:43.766 18:27:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:07:43.766 18:27:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:07:43.766 18:27:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:43.766 18:27:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:07:44.024 18:27:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:07:44.024 18:27:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:07:44.024 18:27:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:07:44.024 18:27:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:07:44.024 18:27:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:44.024 18:27:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:07:44.024 18:27:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:44.024 18:27:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:07:44.024 18:27:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:44.024 18:27:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:44.024 rmmod nvme_tcp 00:07:44.024 rmmod nvme_fabrics 00:07:44.024 rmmod nvme_keyring 00:07:44.024 18:27:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:44.024 18:27:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:07:44.024 18:27:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:07:44.024 18:27:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 67075 ']' 00:07:44.024 18:27:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 67075 00:07:44.024 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 67075 ']' 00:07:44.024 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 67075 00:07:44.024 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:07:44.024 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:44.024 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67075 00:07:44.024 killing process with pid 67075 00:07:44.024 18:27:06 nvmf_tcp.nvmf_rpc 
-- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:44.024 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:44.024 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67075' 00:07:44.025 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 67075 00:07:44.025 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 67075 00:07:44.283 18:27:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:44.283 18:27:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:44.283 18:27:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:44.283 18:27:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:44.283 18:27:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:44.283 18:27:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:44.283 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:44.283 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:44.283 18:27:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:44.283 00:07:44.283 real 0m19.570s 00:07:44.283 user 1m12.027s 00:07:44.283 sys 0m3.858s 00:07:44.283 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:44.283 18:27:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:44.283 ************************************ 00:07:44.283 END TEST nvmf_rpc 00:07:44.283 ************************************ 00:07:44.283 18:27:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:44.283 18:27:06 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:07:44.283 18:27:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:44.283 18:27:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:44.283 18:27:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:44.283 ************************************ 00:07:44.283 START TEST nvmf_invalid 00:07:44.283 ************************************ 00:07:44.283 18:27:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:07:44.541 * Looking for test storage... 
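One note before the nvmf_invalid run above gets under way: the check that closed out nvmf_rpc sums fields of the nvmf_get_stats dump with the jsum helper (used at target/rpc.sh @112-@113, defined at @19-@20) and only asserts that the totals are non-zero. A sketch of that helper consistent with the jq and awk calls in the trace; exactly how the $stats JSON captured at @110 is piped into jq is an assumption:

jsum() {                                            # target/rpc.sh @19-@20, per the trace
    local filter=$1
    jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
}

# Worked against the stats dump above:
#   jsum '.poll_groups[].admin_qpairs'  ->  2 + 3 + 1 + 1    = 7   (checked by (( 7 > 0 )) at @112)
#   jsum '.poll_groups[].io_qpairs'     -> 16 + 17 + 19 + 18 = 70  (checked by (( 70 > 0 )) at @113)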
00:07:44.541 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:44.541 18:27:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:44.541 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:07:44.542 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:44.542 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:44.542 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:44.542 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:44.542 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:44.542 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:44.542 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:44.542 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:44.542 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:44.542 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:44.542 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:07:44.542 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=ee8aff67-4252-4979-91cf-1a72f40d57b6 00:07:44.542 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:44.542 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:44.542 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:44.542 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:44.542 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:44.542 18:27:07 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:44.542 18:27:07 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:44.542 18:27:07 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:44.542 18:27:07 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.542 18:27:07 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.542 
18:27:07 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.542 18:27:07 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:07:44.542 18:27:07 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.542 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:07:44.542 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:44.542 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:44.542 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:44.542 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:44.542 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:44.542 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:44.542 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:44.542 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:44.542 18:27:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:07:44.542 18:27:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:44.542 18:27:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:07:44.542 18:27:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:07:44.542 18:27:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:07:44.542 18:27:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:07:44.542 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:44.542 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:44.542 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:44.542 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:44.542 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:44.542 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:44.542 18:27:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:44.542 18:27:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:44.542 18:27:07 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:44.542 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:44.542 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:44.542 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:44.542 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:44.542 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:44.542 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:44.542 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:44.542 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:44.542 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:44.542 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:44.542 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:44.542 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:44.542 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:44.542 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:44.542 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:44.542 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:44.542 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:44.542 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:44.542 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:44.542 Cannot find device "nvmf_tgt_br" 00:07:44.542 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@155 -- # true 00:07:44.542 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:44.542 Cannot find device "nvmf_tgt_br2" 00:07:44.542 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@156 -- # true 00:07:44.542 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:44.542 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:44.542 Cannot find device "nvmf_tgt_br" 00:07:44.542 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@158 -- # true 00:07:44.542 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:44.801 Cannot find device "nvmf_tgt_br2" 00:07:44.801 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@159 -- # true 00:07:44.801 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:44.801 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:44.801 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:44.801 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:44.801 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@162 -- # true 00:07:44.801 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:44.801 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:44.801 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@163 -- # true 00:07:44.801 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:44.801 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:44.801 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:44.801 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:44.801 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:44.801 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:44.801 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:44.801 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:44.801 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:44.801 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:44.801 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:44.801 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:44.801 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:44.801 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:44.801 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:44.801 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:44.801 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:44.801 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:44.801 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:45.061 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:45.061 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:45.061 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:45.061 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:45.061 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:45.061 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:45.061 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:07:45.061 00:07:45.061 --- 10.0.0.2 ping statistics --- 00:07:45.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:45.061 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:07:45.061 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:45.061 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:07:45.061 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:07:45.061 00:07:45.061 --- 10.0.0.3 ping statistics --- 00:07:45.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:45.061 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:07:45.061 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:45.061 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:45.061 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:07:45.061 00:07:45.061 --- 10.0.0.1 ping statistics --- 00:07:45.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:45.061 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:07:45.061 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:45.061 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@433 -- # return 0 00:07:45.061 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:45.061 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:45.061 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:45.061 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:45.061 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:45.061 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:45.061 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:45.061 18:27:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:07:45.061 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:45.061 18:27:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:45.061 18:27:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:45.061 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=67591 00:07:45.061 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 67591 00:07:45.061 18:27:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:45.061 18:27:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 67591 ']' 00:07:45.061 18:27:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.061 18:27:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:45.061 18:27:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:45.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:45.061 18:27:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:45.062 18:27:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:45.062 [2024-07-15 18:27:07.586536] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 
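The ping checks above close out nvmf_veth_init (nvmf/common.sh @141-@207 in the trace), which builds the virtual topology the rest of the run relies on: the target sits inside the nvmf_tgt_ns_spdk namespace on 10.0.0.2 and 10.0.0.3, the initiator stays in the root namespace on 10.0.0.1, and all three veth pairs hang off the nvmf_br bridge. A condensed sketch keeping only the topology-defining calls from the trace (the link-up steps and the preceding cleanup are omitted); the startup notices from the freshly launched nvmf_tgt continue below:

ip netns add nvmf_tgt_ns_spdk                                              # @166
ip link add nvmf_init_if type veth peer name nvmf_init_br                  # @169: initiator side
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br                    # @170: target, first port
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2                  # @171: target, second port
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                             # @174
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk                            # @175
ip addr add 10.0.0.1/24 dev nvmf_init_if                                   # @178: NVMF_INITIATOR_IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if     # @179: NVMF_FIRST_TARGET_IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2    # @180: NVMF_SECOND_TARGET_IP
ip link add nvmf_br type bridge                                            # @192
ip link set nvmf_init_br master nvmf_br                                    # @196
ip link set nvmf_tgt_br master nvmf_br                                     # @197
ip link set nvmf_tgt_br2 master nvmf_br                                    # @198
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT          # @201: admit NVMe/TCP traffic
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                        # @202: let the bridge forward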
00:07:45.062 [2024-07-15 18:27:07.586619] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:45.321 [2024-07-15 18:27:07.733753] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:45.321 [2024-07-15 18:27:07.837846] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:45.321 [2024-07-15 18:27:07.837902] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:45.321 [2024-07-15 18:27:07.837914] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:45.321 [2024-07-15 18:27:07.837924] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:45.322 [2024-07-15 18:27:07.837932] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:45.322 [2024-07-15 18:27:07.838027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:45.322 [2024-07-15 18:27:07.838412] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:45.322 [2024-07-15 18:27:07.838681] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.322 [2024-07-15 18:27:07.838619] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:46.258 18:27:08 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:46.258 18:27:08 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:07:46.258 18:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:46.258 18:27:08 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:46.258 18:27:08 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:46.258 18:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:46.258 18:27:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:07:46.258 18:27:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode28971 00:07:46.258 [2024-07-15 18:27:08.837586] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:07:46.518 18:27:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='2024/07/15 18:27:08 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode28971 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:07:46.518 request: 00:07:46.518 { 00:07:46.518 "method": "nvmf_create_subsystem", 00:07:46.518 "params": { 00:07:46.518 "nqn": "nqn.2016-06.io.spdk:cnode28971", 00:07:46.518 "tgt_name": "foobar" 00:07:46.518 } 00:07:46.518 } 00:07:46.518 Got JSON-RPC error response 00:07:46.518 GoRPCClient: error on JSON-RPC call' 00:07:46.518 18:27:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ 2024/07/15 18:27:08 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode28971 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:07:46.518 
request: 00:07:46.518 { 00:07:46.518 "method": "nvmf_create_subsystem", 00:07:46.518 "params": { 00:07:46.518 "nqn": "nqn.2016-06.io.spdk:cnode28971", 00:07:46.518 "tgt_name": "foobar" 00:07:46.518 } 00:07:46.518 } 00:07:46.518 Got JSON-RPC error response 00:07:46.518 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:07:46.518 18:27:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:07:46.518 18:27:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode30965 00:07:46.518 [2024-07-15 18:27:09.125778] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30965: invalid serial number 'SPDKISFASTANDAWESOME' 00:07:46.777 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='2024/07/15 18:27:09 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode30965 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:07:46.777 request: 00:07:46.777 { 00:07:46.777 "method": "nvmf_create_subsystem", 00:07:46.777 "params": { 00:07:46.777 "nqn": "nqn.2016-06.io.spdk:cnode30965", 00:07:46.777 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:07:46.777 } 00:07:46.777 } 00:07:46.777 Got JSON-RPC error response 00:07:46.777 GoRPCClient: error on JSON-RPC call' 00:07:46.777 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ 2024/07/15 18:27:09 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode30965 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:07:46.777 request: 00:07:46.777 { 00:07:46.777 "method": "nvmf_create_subsystem", 00:07:46.777 "params": { 00:07:46.777 "nqn": "nqn.2016-06.io.spdk:cnode30965", 00:07:46.777 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:07:46.777 } 00:07:46.777 } 00:07:46.777 Got JSON-RPC error response 00:07:46.777 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:07:46.777 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:07:46.777 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode28469 00:07:46.777 [2024-07-15 18:27:09.361903] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28469: invalid model number 'SPDK_Controller' 00:07:46.777 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='2024/07/15 18:27:09 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode28469], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:07:46.777 request: 00:07:46.777 { 00:07:46.777 "method": "nvmf_create_subsystem", 00:07:46.777 "params": { 00:07:46.777 "nqn": "nqn.2016-06.io.spdk:cnode28469", 00:07:46.777 "model_number": "SPDK_Controller\u001f" 00:07:46.777 } 00:07:46.777 } 00:07:46.777 Got JSON-RPC error response 00:07:46.777 GoRPCClient: error on JSON-RPC call' 00:07:46.777 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ 2024/07/15 18:27:09 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller 
nqn:nqn.2016-06.io.spdk:cnode28469], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:07:46.777 request: 00:07:46.777 { 00:07:46.777 "method": "nvmf_create_subsystem", 00:07:46.777 "params": { 00:07:46.777 "nqn": "nqn.2016-06.io.spdk:cnode28469", 00:07:46.777 "model_number": "SPDK_Controller\u001f" 00:07:46.777 } 00:07:46.777 } 00:07:46.777 Got JSON-RPC error response 00:07:46.777 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:07:47.036 18:27:09 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
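The character-by-character appends running above and below are gen_random_s (target/invalid.sh @19-@31) assembling the random strings that the negative tests feed to nvmf_create_subsystem; RANDOM=0 at @16 keeps the sequence reproducible between runs. A minimal sketch consistent with the per-character printf / echo -e / string+= steps in the trace; what the helper does after the leading '-' check at @28 is not visible in this excerpt:

gen_random_s() {
    local length=$1 ll c
    local chars=($(seq 32 127))                              # the decimal character codes listed at @21
    local string=
    for ((ll = 0; ll < length; ll++)); do                    # @24
        c=$(printf '%x' "${chars[RANDOM % ${#chars[@]}]}")   # @25: pick a code, e.g. 57 -> 39
        string+=$(echo -e "\x$c")                            # @25: decode it, '\x39' -> '9'
    done
    if [[ ${string:0:1} == - ]]; then                        # @28: guard against a leading '-'
        :                                                    # the replacement step is outside this excerpt
    fi
    echo "$string"                                           # @31
}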
00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ 9 == \- ]] 00:07:47.036 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '9Wxo_ccJmAa?T31'\''^=7q(' 00:07:47.037 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s '9Wxo_ccJmAa?T31'\''^=7q(' nqn.2016-06.io.spdk:cnode10825 00:07:47.296 [2024-07-15 18:27:09.741958] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10825: invalid serial number '9Wxo_ccJmAa?T31'^=7q(' 00:07:47.296 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='2024/07/15 18:27:09 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode10825 serial_number:9Wxo_ccJmAa?T31'\''^=7q(], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN 9Wxo_ccJmAa?T31'\''^=7q( 00:07:47.296 request: 00:07:47.296 { 00:07:47.296 "method": "nvmf_create_subsystem", 00:07:47.296 "params": { 00:07:47.296 "nqn": "nqn.2016-06.io.spdk:cnode10825", 00:07:47.296 "serial_number": "9Wxo_ccJmAa?T31'\''^=7q(" 00:07:47.296 } 00:07:47.296 } 00:07:47.296 Got JSON-RPC error response 00:07:47.296 GoRPCClient: error on JSON-RPC call' 00:07:47.296 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ 2024/07/15 18:27:09 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode10825 serial_number:9Wxo_ccJmAa?T31'^=7q(], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN 9Wxo_ccJmAa?T31'^=7q( 00:07:47.296 request: 00:07:47.296 { 00:07:47.296 "method": "nvmf_create_subsystem", 00:07:47.296 "params": { 00:07:47.296 "nqn": "nqn.2016-06.io.spdk:cnode10825", 00:07:47.296 "serial_number": "9Wxo_ccJmAa?T31'^=7q(" 00:07:47.296 } 00:07:47.296 } 00:07:47.296 Got JSON-RPC error response 00:07:47.296 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:07:47.296 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:07:47.296 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:07:47.296 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:07:47.296 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:07:47.296 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:07:47.296 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:07:47.296 18:27:09 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.296 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:07:47.296 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:07:47.296 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:07:47.296 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.296 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.296 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:07:47.296 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:07:47.296 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:07:47.296 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.296 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.296 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:07:47.296 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:07:47.296 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:07:47.296 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.296 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.296 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:07:47.296 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:07:47.296 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:07:47.296 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.296 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.296 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:07:47.296 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:07:47.296 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:07:47.296 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.296 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.296 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:07:47.296 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:07:47.296 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:07:47.296 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.297 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.297 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:07:47.297 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:07:47.297 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:07:47.297 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.297 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.297 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:07:47.297 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:07:47.297 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:07:47.297 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.297 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.297 18:27:09 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:07:47.297 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:07:47.297 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:07:47.297 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.297 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.297 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:07:47.297 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:07:47.297 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:07:47.297 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.297 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.297 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:07:47.297 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:07:47.297 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:07:47.297 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.297 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.297 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:07:47.297 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:07:47.297 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:07:47.297 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.297 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.297 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:07:47.297 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:07:47.297 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:07:47.297 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.297 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.297 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:07:47.297 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:07:47.297 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:07:47.297 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.297 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.297 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:07:47.297 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:07:47.297 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:07:47.297 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.297 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.297 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:07:47.297 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:07:47.297 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:07:47.297 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.297 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.556 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:07:47.556 18:27:09 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:07:47.556 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:07:47.556 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.556 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.556 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:07:47.556 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:07:47.556 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:07:47.556 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.556 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.556 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:07:47.556 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:07:47.556 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:07:47.556 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.556 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.556 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:07:47.556 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:07:47.556 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:07:47.556 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.556 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.556 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:07:47.556 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:07:47.556 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:07:47.556 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.556 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.556 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:07:47.556 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:07:47.556 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:07:47.556 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.556 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.556 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:07:47.556 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:07:47.556 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:07:47.556 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.556 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.556 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:07:47.556 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:07:47.556 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:07:47.556 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.556 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.556 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:07:47.556 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:07:47.556 18:27:09 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:07:47.556 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.556 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.556 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:07:47.556 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:07:47.556 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:07:47.556 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.556 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.556 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:07:47.556 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:07:47.556 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:07:47.556 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.556 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.556 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:07:47.556 18:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:07:47.556 18:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:07:47.556 18:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.556 18:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.556 18:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:07:47.556 18:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:07:47.556 18:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:07:47.556 18:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.556 18:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.556 18:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:07:47.557 18:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:07:47.557 18:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:07:47.557 18:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.557 18:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.557 18:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:07:47.557 18:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:07:47.557 18:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:07:47.557 18:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.557 18:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.557 18:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:07:47.557 18:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:07:47.557 18:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:07:47.557 18:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.557 18:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.557 18:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:07:47.557 18:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:07:47.557 18:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:07:47.557 18:27:10 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.557 18:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.557 18:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:07:47.557 18:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:07:47.557 18:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:07:47.557 18:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.557 18:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.557 18:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:07:47.557 18:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:07:47.557 18:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:07:47.557 18:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.557 18:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.557 18:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:07:47.557 18:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:07:47.557 18:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:07:47.557 18:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.557 18:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.557 18:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:07:47.557 18:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:07:47.557 18:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:07:47.557 18:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.557 18:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.557 18:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:07:47.557 18:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:07:47.557 18:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:07:47.557 18:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.557 18:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.557 18:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:07:47.557 18:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:07:47.557 18:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:07:47.557 18:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.557 18:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.557 18:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:07:47.557 18:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:07:47.557 18:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:07:47.557 18:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.557 18:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.557 18:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:07:47.557 18:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:07:47.557 18:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:07:47.557 18:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.557 18:27:10 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.557 18:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ ` == \- ]] 00:07:47.557 18:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '`92Iafv<>{a@q{^M1|{a@q{^M1|{a@q{^M1|{a@q{^M1|{a@q{^M1|{a@q{^M1| /dev/null' 00:07:50.135 18:27:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:50.135 18:27:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:50.135 ************************************ 00:07:50.135 END TEST nvmf_invalid 00:07:50.135 ************************************ 00:07:50.135 00:07:50.135 real 0m5.809s 00:07:50.135 user 0m22.108s 00:07:50.135 sys 0m1.593s 00:07:50.136 18:27:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:50.136 18:27:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:50.403 18:27:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:50.403 18:27:12 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:50.403 18:27:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:50.403 18:27:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:50.403 18:27:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:50.403 ************************************ 00:07:50.403 START TEST nvmf_abort 00:07:50.403 ************************************ 00:07:50.403 18:27:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:50.403 * Looking for test storage... 00:07:50.403 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:50.403 18:27:12 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:50.403 18:27:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:50.403 18:27:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:50.403 18:27:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:50.403 18:27:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:50.403 18:27:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:50.403 18:27:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:50.403 18:27:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:50.403 18:27:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:50.403 18:27:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:50.403 18:27:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:50.403 18:27:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:50.403 18:27:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:07:50.403 18:27:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=ee8aff67-4252-4979-91cf-1a72f40d57b6 00:07:50.403 18:27:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:50.403 18:27:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:50.403 18:27:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:50.403 18:27:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 
-- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:50.403 18:27:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:50.403 18:27:12 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:50.403 18:27:12 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:50.403 18:27:12 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:50.403 18:27:12 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.403 18:27:12 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.403 18:27:12 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.403 18:27:12 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:50.403 18:27:12 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.403 18:27:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:07:50.403 18:27:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:50.403 18:27:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:50.403 18:27:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:50.403 18:27:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:50.403 18:27:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:50.403 18:27:12 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:50.403 18:27:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:50.403 18:27:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:50.403 18:27:12 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:50.403 18:27:12 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:50.403 18:27:12 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:07:50.403 18:27:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:50.403 18:27:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:50.403 18:27:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:50.403 18:27:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:50.403 18:27:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:50.403 18:27:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:50.403 18:27:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:50.403 18:27:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:50.404 18:27:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:50.404 18:27:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:50.404 18:27:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:50.404 18:27:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:50.404 18:27:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:50.404 18:27:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:50.404 18:27:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:50.404 18:27:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:50.404 18:27:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:50.404 18:27:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:50.404 18:27:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:50.404 18:27:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:50.404 18:27:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:50.404 18:27:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:50.404 18:27:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:50.404 18:27:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:50.404 18:27:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:50.404 18:27:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:50.404 18:27:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:50.404 18:27:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:50.404 Cannot find device "nvmf_tgt_br" 00:07:50.404 18:27:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@155 -- # true 00:07:50.404 18:27:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:50.404 Cannot find device "nvmf_tgt_br2" 00:07:50.404 18:27:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@156 -- # true 
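The ip(8) entries around this point first tear down any interfaces left over from a previous run (hence the "Cannot find device" messages) and then rebuild the test network. Below is a condensed, hand-written sketch of the topology those commands create; interface names, namespace name and addresses are taken from the log itself, while the real nvmf_veth_init in test/nvmf/common.sh additionally handles cleanup traps and failure paths that are omitted here.

    # Sketch of the veth/bridge topology built by nvmf_veth_init (assumptions:
    # names and addresses copied from the surrounding log; no error handling).
    set -e

    ip netns add nvmf_tgt_ns_spdk

    # one veth pair for the initiator, two for the target (second one carries 10.0.0.3)
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # initiator on 10.0.0.1, target addresses inside the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # bring everything up
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br  up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # bridge the host-side ends so initiator and target can reach each other
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # allow NVMe/TCP traffic to port 4420 and forwarding across the bridge
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three single-packet pings recorded further down (to 10.0.0.2, 10.0.0.3, and from inside the namespace back to 10.0.0.1) exercise each leg of this topology before the target application is started.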
00:07:50.404 18:27:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:50.404 18:27:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:50.673 Cannot find device "nvmf_tgt_br" 00:07:50.673 18:27:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@158 -- # true 00:07:50.673 18:27:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:50.673 Cannot find device "nvmf_tgt_br2" 00:07:50.673 18:27:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@159 -- # true 00:07:50.673 18:27:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:50.673 18:27:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:50.673 18:27:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:50.673 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:50.673 18:27:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@162 -- # true 00:07:50.673 18:27:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:50.673 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:50.673 18:27:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@163 -- # true 00:07:50.673 18:27:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:50.673 18:27:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:50.673 18:27:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:50.673 18:27:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:50.673 18:27:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:50.673 18:27:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:50.673 18:27:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:50.673 18:27:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:50.673 18:27:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:50.673 18:27:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:50.673 18:27:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:50.673 18:27:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:50.673 18:27:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:50.673 18:27:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:50.673 18:27:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:50.673 18:27:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:50.673 18:27:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:50.673 18:27:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:50.674 18:27:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:50.940 18:27:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@197 -- # ip 
link set nvmf_tgt_br master nvmf_br 00:07:50.940 18:27:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:50.940 18:27:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:50.940 18:27:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:50.940 18:27:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:50.940 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:50.940 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.106 ms 00:07:50.940 00:07:50.940 --- 10.0.0.2 ping statistics --- 00:07:50.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:50.940 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:07:50.940 18:27:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:50.940 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:50.940 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.084 ms 00:07:50.940 00:07:50.940 --- 10.0.0.3 ping statistics --- 00:07:50.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:50.940 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:07:50.940 18:27:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:50.940 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:50.940 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:07:50.940 00:07:50.940 --- 10.0.0.1 ping statistics --- 00:07:50.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:50.940 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:07:50.940 18:27:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:50.940 18:27:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@433 -- # return 0 00:07:50.941 18:27:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:50.941 18:27:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:50.941 18:27:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:50.941 18:27:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:50.941 18:27:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:50.941 18:27:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:50.941 18:27:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:50.941 18:27:13 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:50.941 18:27:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:50.941 18:27:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:50.941 18:27:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:50.941 18:27:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=68100 00:07:50.941 18:27:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 68100 00:07:50.941 18:27:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:50.941 18:27:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 68100 ']' 00:07:50.941 18:27:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.941 18:27:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:50.941 Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:50.941 18:27:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:50.941 18:27:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:50.941 18:27:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:50.941 [2024-07-15 18:27:13.483700] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:07:50.941 [2024-07-15 18:27:13.483772] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:51.198 [2024-07-15 18:27:13.623703] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:51.198 [2024-07-15 18:27:13.733283] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:51.198 [2024-07-15 18:27:13.733494] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:51.198 [2024-07-15 18:27:13.733720] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:51.198 [2024-07-15 18:27:13.733781] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:51.198 [2024-07-15 18:27:13.733871] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:51.198 [2024-07-15 18:27:13.734031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:51.198 [2024-07-15 18:27:13.734997] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:51.198 [2024-07-15 18:27:13.734997] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:52.129 18:27:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:52.129 18:27:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:07:52.129 18:27:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:52.129 18:27:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:52.129 18:27:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:52.129 18:27:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:52.129 18:27:14 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:52.129 18:27:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.129 18:27:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:52.129 [2024-07-15 18:27:14.434725] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:52.129 18:27:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.129 18:27:14 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:52.129 18:27:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.129 18:27:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:52.129 Malloc0 00:07:52.129 18:27:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.129 18:27:14 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd 
bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:52.129 18:27:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.129 18:27:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:52.129 Delay0 00:07:52.129 18:27:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.129 18:27:14 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:52.129 18:27:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.129 18:27:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:52.129 18:27:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.129 18:27:14 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:52.129 18:27:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.129 18:27:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:52.129 18:27:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.129 18:27:14 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:52.129 18:27:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.129 18:27:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:52.129 [2024-07-15 18:27:14.520454] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:52.129 18:27:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.129 18:27:14 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:52.129 18:27:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.129 18:27:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:52.129 18:27:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.129 18:27:14 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:52.129 [2024-07-15 18:27:14.717784] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:54.655 Initializing NVMe Controllers 00:07:54.655 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:54.655 controller IO queue size 128 less than required 00:07:54.655 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:54.655 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:54.655 Initialization complete. Launching workers. 
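Pieced together from the rpc_cmd entries above, the abort test's target configuration and workload reduce to roughly the sequence below. This is a hand-written equivalent using scripts/rpc.py directly, not a drop-in replacement for abort.sh, and it assumes the nvmf_tgt started earlier is already listening on /var/tmp/spdk.sock; paths and parameters are the ones recorded in the log.

    # Sketch of the abort test's target setup and workload (parameters as logged).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
    $rpc bdev_malloc_create 64 4096 -b Malloc0
    # Delay0 layers artificial latency on top of Malloc0 so queued commands stay
    # outstanding long enough for the abort example to cancel them.
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    # run the abort example against the delayed namespace (flags as recorded in the log)
    /home/vagrant/spdk_repo/spdk/build/examples/abort \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -c 0x1 -t 1 -l warning -q 128

The NS:/CTRLR: summary lines that follow report how many of those queued reads completed versus how many abort commands were submitted and succeeded.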
00:07:54.655 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 124, failed: 38924 00:07:54.655 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 38986, failed to submit 62 00:07:54.655 success 38928, unsuccess 58, failed 0 00:07:54.655 18:27:16 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:54.655 18:27:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:54.655 18:27:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:54.655 18:27:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:54.655 18:27:16 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:54.655 18:27:16 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:54.655 18:27:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:54.655 18:27:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:07:54.655 18:27:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:54.655 18:27:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:07:54.655 18:27:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:54.655 18:27:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:54.655 rmmod nvme_tcp 00:07:54.655 rmmod nvme_fabrics 00:07:54.655 rmmod nvme_keyring 00:07:54.655 18:27:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:54.655 18:27:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:07:54.655 18:27:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:07:54.655 18:27:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 68100 ']' 00:07:54.655 18:27:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 68100 00:07:54.655 18:27:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 68100 ']' 00:07:54.655 18:27:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 68100 00:07:54.655 18:27:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:07:54.655 18:27:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:54.655 18:27:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68100 00:07:54.655 18:27:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:07:54.655 18:27:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:07:54.655 killing process with pid 68100 00:07:54.655 18:27:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68100' 00:07:54.655 18:27:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 68100 00:07:54.655 18:27:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 68100 00:07:54.655 18:27:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:54.655 18:27:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:54.655 18:27:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:54.655 18:27:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:54.655 18:27:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:54.655 18:27:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:54.655 18:27:17 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:54.655 18:27:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:54.913 18:27:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:54.913 00:07:54.913 real 0m4.522s 00:07:54.913 user 0m12.215s 00:07:54.913 sys 0m1.310s 00:07:54.913 18:27:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:54.913 18:27:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:54.913 ************************************ 00:07:54.913 END TEST nvmf_abort 00:07:54.913 ************************************ 00:07:54.913 18:27:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:54.913 18:27:17 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:54.913 18:27:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:54.913 18:27:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:54.913 18:27:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:54.913 ************************************ 00:07:54.913 START TEST nvmf_ns_hotplug_stress 00:07:54.913 ************************************ 00:07:54.913 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:54.913 * Looking for test storage... 00:07:54.913 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:54.913 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:54.913 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:54.913 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:54.913 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:54.913 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:54.913 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:54.913 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:54.913 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:54.913 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:54.913 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:54.913 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:54.913 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:54.913 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:07:54.913 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=ee8aff67-4252-4979-91cf-1a72f40d57b6 00:07:54.913 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:54.913 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:54.913 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:54.913 18:27:17 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:54.913 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:54.913 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:54.913 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:54.913 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:54.913 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.172 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.172 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.172 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:55.172 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.172 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:07:55.172 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:55.172 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:55.172 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:55.172 18:27:17 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:55.172 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:55.172 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:55.172 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:55.172 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:55.172 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:55.172 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:55.172 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:55.172 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:55.172 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:55.172 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:55.172 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:55.172 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:55.172 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:55.172 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:55.172 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:55.172 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:55.172 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:55.172 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:55.172 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:55.172 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:55.172 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:55.172 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:55.172 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:55.172 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:55.172 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:55.172 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:55.172 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:55.172 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:55.172 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:55.172 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:55.172 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:55.172 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 
-- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:55.172 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:55.172 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:55.172 Cannot find device "nvmf_tgt_br" 00:07:55.172 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # true 00:07:55.172 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:55.172 Cannot find device "nvmf_tgt_br2" 00:07:55.172 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # true 00:07:55.172 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:55.172 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:55.172 Cannot find device "nvmf_tgt_br" 00:07:55.172 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # true 00:07:55.172 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:55.172 Cannot find device "nvmf_tgt_br2" 00:07:55.172 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # true 00:07:55.172 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:55.172 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:55.172 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:55.172 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:55.172 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # true 00:07:55.172 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:55.172 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:55.172 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # true 00:07:55.172 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:55.172 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:55.172 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:55.172 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:55.172 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:55.172 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:55.430 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:55.430 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:55.430 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:55.430 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:55.430 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:55.430 18:27:17 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:55.430 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:55.430 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:55.430 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:55.430 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:55.430 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:55.430 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:55.430 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:55.430 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:55.430 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:55.430 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:55.430 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:55.430 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:55.430 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:55.430 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.126 ms 00:07:55.430 00:07:55.430 --- 10.0.0.2 ping statistics --- 00:07:55.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:55.430 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:07:55.430 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:55.430 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:55.430 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:07:55.430 00:07:55.430 --- 10.0.0.3 ping statistics --- 00:07:55.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:55.430 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:07:55.430 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:55.430 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:55.430 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.085 ms 00:07:55.430 00:07:55.430 --- 10.0.0.1 ping statistics --- 00:07:55.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:55.430 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:07:55.430 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:55.430 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@433 -- # return 0 00:07:55.430 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:55.430 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:55.430 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:55.430 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:55.430 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:55.430 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:55.430 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:55.430 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:55.430 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:55.431 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:55.431 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:55.431 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=68361 00:07:55.431 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:55.431 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 68361 00:07:55.431 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 68361 ']' 00:07:55.431 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:55.431 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:55.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:55.431 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:55.431 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:55.431 18:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:55.688 [2024-07-15 18:27:18.045185] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:07:55.688 [2024-07-15 18:27:18.045256] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:55.688 [2024-07-15 18:27:18.187510] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:55.688 [2024-07-15 18:27:18.284395] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:07:55.688 [2024-07-15 18:27:18.284448] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:55.688 [2024-07-15 18:27:18.284457] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:55.688 [2024-07-15 18:27:18.284465] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:55.688 [2024-07-15 18:27:18.284472] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:55.688 [2024-07-15 18:27:18.284671] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:55.688 [2024-07-15 18:27:18.284820] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:55.688 [2024-07-15 18:27:18.284820] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:56.621 18:27:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:56.621 18:27:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:07:56.621 18:27:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:56.621 18:27:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:56.621 18:27:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:56.621 18:27:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:56.621 18:27:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:56.621 18:27:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:56.621 [2024-07-15 18:27:19.178734] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:56.621 18:27:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:56.879 18:27:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:57.137 [2024-07-15 18:27:19.578507] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:57.137 18:27:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:57.395 18:27:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:57.395 Malloc0 00:07:57.653 18:27:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:57.653 Delay0 00:07:57.653 18:27:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:57.911 18:27:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:58.169 NULL1 00:07:58.169 
18:27:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:58.425 18:27:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=68491 00:07:58.425 18:27:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:58.426 18:27:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68491 00:07:58.426 18:27:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:59.798 Read completed with error (sct=0, sc=11) 00:07:59.799 18:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:59.799 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:59.799 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:59.799 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:59.799 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:59.799 18:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:59.799 18:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:08:00.055 true 00:08:00.055 18:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68491 00:08:00.055 18:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:00.982 18:27:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:00.982 18:27:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:08:00.982 18:27:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:08:01.239 true 00:08:01.239 18:27:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68491 00:08:01.239 18:27:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:01.497 18:27:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:01.754 18:27:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:08:01.754 18:27:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:08:02.011 true 00:08:02.011 18:27:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68491 00:08:02.011 18:27:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:08:02.976 18:27:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:02.976 18:27:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:08:02.976 18:27:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:08:03.233 true 00:08:03.233 18:27:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68491 00:08:03.233 18:27:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:03.490 18:27:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:03.746 18:27:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:08:03.746 18:27:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:08:03.746 true 00:08:04.003 18:27:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68491 00:08:04.003 18:27:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:04.934 18:27:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:04.934 18:27:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:08:04.934 18:27:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:08:05.189 true 00:08:05.189 18:27:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68491 00:08:05.189 18:27:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.445 18:27:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:05.701 18:27:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:08:05.701 18:27:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:08:05.959 true 00:08:05.959 18:27:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68491 00:08:05.959 18:27:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.911 18:27:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:07.169 18:27:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:08:07.169 18:27:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:08:07.169 true 00:08:07.169 18:27:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68491 00:08:07.169 18:27:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:07.426 18:27:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:07.683 18:27:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:08:07.683 18:27:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:08:07.940 true 00:08:07.940 18:27:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68491 00:08:07.940 18:27:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:08.898 18:27:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:09.156 18:27:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:08:09.156 18:27:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:08:09.414 true 00:08:09.414 18:27:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68491 00:08:09.414 18:27:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:09.672 18:27:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:09.929 18:27:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:08:09.929 18:27:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:08:09.929 true 00:08:09.929 18:27:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68491 00:08:09.929 18:27:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:10.862 18:27:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:10.862 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:11.119 18:27:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:08:11.120 18:27:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:08:11.377 true 00:08:11.377 18:27:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68491 00:08:11.377 18:27:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:11.636 18:27:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:11.895 18:27:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:08:11.895 18:27:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:08:12.153 true 00:08:12.153 18:27:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68491 00:08:12.153 18:27:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:12.410 18:27:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:12.668 18:27:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:08:12.668 18:27:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:08:12.926 true 00:08:12.926 18:27:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68491 00:08:12.926 18:27:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:13.860 18:27:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:14.118 18:27:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:08:14.118 18:27:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:08:14.118 true 00:08:14.377 18:27:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68491 00:08:14.377 18:27:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:14.636 18:27:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:14.896 18:27:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:08:14.896 18:27:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:08:14.896 true 00:08:15.155 18:27:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68491 00:08:15.155 18:27:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:16.094 18:27:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:16.094 18:27:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:08:16.094 18:27:38 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:08:16.353 true 00:08:16.353 18:27:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68491 00:08:16.353 18:27:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:16.612 18:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:16.612 18:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:08:16.612 18:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:08:16.871 true 00:08:16.871 18:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68491 00:08:16.871 18:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:17.808 18:27:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:18.068 18:27:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:08:18.068 18:27:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:08:18.327 true 00:08:18.327 18:27:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68491 00:08:18.327 18:27:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:18.586 18:27:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:18.845 18:27:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:08:18.845 18:27:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:08:19.105 true 00:08:19.105 18:27:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68491 00:08:19.105 18:27:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:20.048 18:27:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:20.048 18:27:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:08:20.048 18:27:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:08:20.307 true 00:08:20.307 18:27:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68491 00:08:20.307 18:27:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
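Stripped of the xtrace prefixes, the cycle that keeps repeating above (script lines 40-50 in the trace markers) pairs a background spdk_nvme_perf run with a hot-plug loop on namespace 1. The following is a reconstruction from the traced commands, not the verbatim script: $rpc is shorthand for the full scripts/rpc.py path, and the while condition is inferred from the recurring kill -0 $PERF_PID checks.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # 30 s random-read load against the subsystem, run in the background.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &
    PERF_PID=$!

    null_size=1000
    # While perf is still alive, detach namespace 1, re-attach Delay0,
    # and bump NULL1's size by one each pass (1000 -> 1001 -> ...).
    while kill -0 "$PERF_PID" 2>/dev/null; do
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))
        $rpc bdev_null_resize NULL1 "$null_size"
    done

The "Read completed with error (sct=0, sc=11)" messages logged above are the completions perf sees while the namespace is detached, which is exactly the condition this stress loop provokes.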
00:08:20.566 18:27:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:20.824 18:27:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:08:20.824 18:27:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:08:20.824 true 00:08:21.083 18:27:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68491 00:08:21.083 18:27:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:22.018 18:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:22.018 18:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:08:22.018 18:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:08:22.277 true 00:08:22.277 18:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68491 00:08:22.277 18:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:22.535 18:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:22.794 18:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:08:22.794 18:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:08:23.053 true 00:08:23.053 18:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68491 00:08:23.053 18:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:23.989 18:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:24.248 18:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:08:24.248 18:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:08:24.248 true 00:08:24.248 18:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68491 00:08:24.248 18:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:24.506 18:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:24.764 18:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:08:24.764 18:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:08:25.022 true 00:08:25.022 18:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68491 00:08:25.022 18:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:25.955 18:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:26.214 18:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:08:26.214 18:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:08:26.473 true 00:08:26.473 18:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68491 00:08:26.473 18:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:26.732 18:27:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:26.732 18:27:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:08:26.732 18:27:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:08:26.990 true 00:08:26.990 18:27:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68491 00:08:26.990 18:27:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:27.924 18:27:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:27.924 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:28.183 18:27:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:08:28.183 18:27:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:08:28.443 true 00:08:28.443 18:27:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68491 00:08:28.443 18:27:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:28.443 Initializing NVMe Controllers 00:08:28.443 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:28.443 Controller IO queue size 128, less than required. 00:08:28.443 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:28.443 Controller IO queue size 128, less than required. 00:08:28.443 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:08:28.443 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:28.443 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:08:28.443 Initialization complete. Launching workers. 00:08:28.443 ======================================================== 00:08:28.443 Latency(us) 00:08:28.443 Device Information : IOPS MiB/s Average min max 00:08:28.443 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 268.56 0.13 256699.34 4890.17 1049475.05 00:08:28.443 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 13152.28 6.42 9732.59 3102.06 523862.81 00:08:28.443 ======================================================== 00:08:28.443 Total : 13420.84 6.55 14674.53 3102.06 1049475.05 00:08:28.443 00:08:28.702 18:27:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:28.702 18:27:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:08:28.702 18:27:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:08:28.962 true 00:08:28.962 18:27:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68491 00:08:28.962 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (68491) - No such process 00:08:28.962 18:27:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 68491 00:08:28.962 18:27:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:29.221 18:27:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:29.480 18:27:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:08:29.480 18:27:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:08:29.480 18:27:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:08:29.480 18:27:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:29.480 18:27:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:08:29.740 null0 00:08:29.740 18:27:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:29.740 18:27:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:29.740 18:27:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:08:29.999 null1 00:08:29.999 18:27:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:29.999 18:27:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:29.999 18:27:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:08:29.999 null2 00:08:29.999 18:27:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:29.999 18:27:52 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:29.999 18:27:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:08:30.258 null3 00:08:30.258 18:27:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:30.258 18:27:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:30.258 18:27:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:08:30.516 null4 00:08:30.516 18:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:30.516 18:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:30.516 18:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:08:30.814 null5 00:08:30.814 18:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:30.814 18:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:30.814 18:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:08:31.073 null6 00:08:31.073 18:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:31.073 18:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:31.073 18:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:08:31.073 null7 00:08:31.073 18:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:31.073 18:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:31.073 18:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:08:31.073 18:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:31.073 18:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:31.073 18:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:08:31.073 18:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:31.073 18:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:31.073 18:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:08:31.073 18:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:31.073 18:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:08:31.073 18:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
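The phase starting here sets up the parallel hot-plug stress: eight null bdevs (null0 through null7) are created, then one add_remove worker per bdev is launched in the background with its PID collected for the wait that appears a little further down (@66). A sketch of that launcher, reconstructed from the @58-@64 trace lines; $rpc is the same scripts/rpc.py shorthand as above, and add_remove itself is sketched after more of its trace below:

    nthreads=8
    pids=()

    # One small null bdev per worker (size 100, 4096-byte block size, as passed in the trace).
    for (( i = 0; i < nthreads; i++ )); do
        $rpc bdev_null_create "null$i" 100 4096
    done

    # Worker i hot-plugs namespace ID i+1 backed by null<i>; all eight run concurrently.
    for (( i = 0; i < nthreads; i++ )); do
        add_remove $((i + 1)) "null$i" &
        pids+=($!)
    done

    wait "${pids[@]}"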
00:08:31.073 18:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.073 18:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:08:31.073 18:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:31.073 18:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:31.073 18:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.073 18:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:31.073 18:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:31.073 18:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:31.073 18:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:31.073 18:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:31.073 18:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:08:31.073 18:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:31.073 18:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:08:31.073 18:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:31.073 18:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:08:31.073 18:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:31.073 18:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.073 18:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:08:31.073 18:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:31.074 18:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:31.074 18:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:31.074 18:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.074 18:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:31.074 18:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:31.332 18:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:31.332 18:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:31.332 18:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:31.332 18:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:08:31.332 18:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:31.332 18:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:31.332 18:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:08:31.332 18:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:31.332 18:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:31.332 18:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.332 18:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:31.332 18:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:08:31.332 18:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:08:31.332 18:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:31.332 18:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.332 18:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:31.332 18:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:31.332 18:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:31.332 18:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:31.332 18:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:31.332 18:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:31.332 18:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:31.333 18:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 69546 69547 69549 69551 69554 69555 69558 69559 00:08:31.333 18:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:08:31.333 18:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:08:31.333 18:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:31.333 18:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.333 18:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:31.333 18:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:08:31.333 18:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:08:31.333 18:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:31.333 18:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.333 18:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:31.333 18:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:31.333 18:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:31.333 18:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:31.333 18:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:31.333 18:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:31.591 18:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:31.591 18:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:31.591 18:27:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:31.591 18:27:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:31.591 18:27:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.591 18:27:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 
nqn.2016-06.io.spdk:cnode1 null3 00:08:31.591 18:27:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:31.591 18:27:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.592 18:27:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:31.592 18:27:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:31.592 18:27:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.592 18:27:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:31.592 18:27:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:31.592 18:27:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.592 18:27:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:31.851 18:27:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:31.851 18:27:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.851 18:27:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:31.851 18:27:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:31.851 18:27:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.851 18:27:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:31.851 18:27:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:31.851 18:27:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.851 18:27:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:31.851 18:27:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:31.851 18:27:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.851 18:27:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:31.851 18:27:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:31.852 18:27:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:31.852 18:27:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:31.852 18:27:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:31.852 18:27:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:32.112 18:27:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:32.112 18:27:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:32.112 18:27:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:32.112 18:27:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.112 18:27:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.112 18:27:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:32.112 18:27:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.112 18:27:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.112 18:27:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:32.112 18:27:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.112 18:27:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.112 18:27:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:32.112 18:27:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.112 18:27:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.112 18:27:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:32.370 18:27:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.370 18:27:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.370 18:27:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:32.370 18:27:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.370 18:27:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.370 18:27:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:32.370 18:27:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.370 18:27:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.370 18:27:54 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:32.370 18:27:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:32.370 18:27:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.370 18:27:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.370 18:27:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:32.370 18:27:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:32.370 18:27:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:32.370 18:27:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:32.370 18:27:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:32.370 18:27:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:32.628 18:27:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.628 18:27:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.628 18:27:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:32.628 18:27:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:32.628 18:27:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:32.628 18:27:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.628 18:27:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.628 18:27:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:32.628 18:27:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.628 18:27:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.628 18:27:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:32.628 18:27:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.628 18:27:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.629 
18:27:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:32.629 18:27:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.629 18:27:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.629 18:27:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:32.886 18:27:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.886 18:27:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.886 18:27:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:32.886 18:27:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.886 18:27:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.886 18:27:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:32.886 18:27:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:32.886 18:27:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:32.886 18:27:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:32.886 18:27:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:32.886 18:27:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.886 18:27:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.886 18:27:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:32.886 18:27:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:32.886 18:27:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:33.144 18:27:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:33.144 18:27:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.144 18:27:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.144 18:27:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 
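Each backgrounded worker is the add_remove shell function whose interleaved xtrace lines fill this stretch of the log: ten add/remove cycles of a fixed namespace ID against nqn.2016-06.io.spdk:cnode1. A sketch reconstructed from the @14-@18 trace lines; the positional-argument handling is paraphrased, and $rpc is again shorthand for scripts/rpc.py:

    add_remove() {
        local nsid=$1 bdev=$2
        local i
        for (( i = 0; i < 10; i++ )); do
            # Attach this worker's null bdev as its own namespace ID, then pull it out again.
            $rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }

Because every worker owns a distinct namespace ID (1 through 8), the contention being exercised is eight concurrent add/remove streams against the same subsystem rather than collisions on a single ID, which is why their traces interleave so freely above.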
00:08:33.144 18:27:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.144 18:27:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.144 18:27:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:33.144 18:27:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.144 18:27:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.144 18:27:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:33.144 18:27:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:33.144 18:27:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.144 18:27:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.144 18:27:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:33.144 18:27:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.144 18:27:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.144 18:27:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:33.144 18:27:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.144 18:27:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.144 18:27:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:33.410 18:27:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.410 18:27:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.410 18:27:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:33.410 18:27:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:33.410 18:27:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:33.410 18:27:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:33.410 18:27:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.410 18:27:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.410 18:27:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:33.410 18:27:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:33.410 18:27:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:33.410 18:27:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:33.670 18:27:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.670 18:27:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.670 18:27:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:33.670 18:27:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.670 18:27:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.670 18:27:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:33.670 18:27:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:33.670 18:27:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.670 18:27:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.670 18:27:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:33.670 18:27:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.670 18:27:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.670 18:27:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:33.670 18:27:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.670 18:27:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.670 18:27:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:33.670 18:27:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:33.670 18:27:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.670 18:27:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.670 18:27:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:33.670 18:27:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:33.929 18:27:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:33.930 18:27:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.930 18:27:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.930 18:27:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:33.930 18:27:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:33.930 18:27:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:33.930 18:27:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:33.930 18:27:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:33.930 18:27:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.930 18:27:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.930 18:27:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:33.930 18:27:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.930 18:27:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.930 18:27:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:34.190 18:27:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:34.190 18:27:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.190 18:27:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.190 18:27:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:34.190 18:27:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.190 18:27:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.190 18:27:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:34.190 18:27:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:34.190 18:27:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( ++i )) 00:08:34.190 18:27:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.190 18:27:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:34.190 18:27:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.190 18:27:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.190 18:27:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:34.190 18:27:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:34.190 18:27:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.190 18:27:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.190 18:27:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:34.190 18:27:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.190 18:27:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.190 18:27:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:34.448 18:27:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:34.448 18:27:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:34.448 18:27:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:34.448 18:27:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.448 18:27:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.448 18:27:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:34.448 18:27:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:34.448 18:27:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.448 18:27:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.448 18:27:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:34.448 18:27:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:34.448 18:27:57 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.448 18:27:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.448 18:27:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:34.708 18:27:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.708 18:27:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.708 18:27:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:34.708 18:27:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.708 18:27:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.708 18:27:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:34.708 18:27:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:34.708 18:27:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:34.708 18:27:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:34.708 18:27:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.708 18:27:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.708 18:27:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:34.708 18:27:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:34.708 18:27:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.708 18:27:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.708 18:27:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:34.967 18:27:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:34.967 18:27:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.967 18:27:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.967 18:27:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:34.967 18:27:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.967 18:27:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:08:34.967 18:27:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:34.967 18:27:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.967 18:27:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.967 18:27:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:34.967 18:27:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:34.967 18:27:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:34.967 18:27:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.967 18:27:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.967 18:27:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:34.967 18:27:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:34.967 18:27:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.967 18:27:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.967 18:27:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:35.225 18:27:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:35.225 18:27:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:35.225 18:27:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:35.225 18:27:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.225 18:27:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.225 18:27:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:35.225 18:27:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:35.225 18:27:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.225 18:27:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.225 18:27:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 
nqn.2016-06.io.spdk:cnode1 null2 00:08:35.225 18:27:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:35.225 18:27:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.225 18:27:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.225 18:27:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:35.484 18:27:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.484 18:27:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.484 18:27:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:35.484 18:27:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.484 18:27:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.484 18:27:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:35.484 18:27:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.484 18:27:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.484 18:27:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:35.484 18:27:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:35.484 18:27:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.484 18:27:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.484 18:27:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:35.484 18:27:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.484 18:27:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.484 18:27:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:35.484 18:27:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:35.484 18:27:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:35.484 18:27:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:35.484 18:27:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:35.742 18:27:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.742 18:27:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.743 18:27:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:35.743 18:27:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:35.743 18:27:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:35.743 18:27:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:35.743 18:27:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.743 18:27:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.743 18:27:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:35.743 18:27:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.743 18:27:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.743 18:27:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.743 18:27:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.743 18:27:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:35.743 18:27:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.743 18:27:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.743 18:27:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:35.743 18:27:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:36.001 18:27:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.001 18:27:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.001 18:27:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.001 18:27:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.001 18:27:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.001 18:27:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.001 18:27:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:36.001 18:27:58 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:36.001 18:27:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:36.001 18:27:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:36.001 18:27:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.001 18:27:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.260 18:27:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:36.260 18:27:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.260 18:27:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.260 18:27:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.260 18:27:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.260 18:27:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.260 18:27:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.260 18:27:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.260 18:27:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.260 18:27:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:36.260 18:27:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:36.260 18:27:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:36.260 18:27:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:08:36.260 18:27:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:36.260 18:27:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:08:36.260 18:27:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:36.260 18:27:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:36.260 rmmod nvme_tcp 00:08:36.260 rmmod nvme_fabrics 00:08:36.260 rmmod nvme_keyring 00:08:36.518 18:27:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:36.518 18:27:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:08:36.518 18:27:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:08:36.518 18:27:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 68361 ']' 00:08:36.518 18:27:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 68361 00:08:36.518 18:27:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 68361 ']' 00:08:36.518 18:27:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 68361 00:08:36.518 18:27:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:08:36.518 18:27:58 nvmf_tcp.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:36.518 18:27:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68361 00:08:36.518 18:27:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:36.518 18:27:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:36.518 killing process with pid 68361 00:08:36.518 18:27:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68361' 00:08:36.518 18:27:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 68361 00:08:36.518 18:27:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 68361 00:08:36.776 18:27:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:36.776 18:27:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:36.776 18:27:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:36.776 18:27:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:36.776 18:27:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:36.776 18:27:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:36.776 18:27:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:36.776 18:27:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:36.776 18:27:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:36.776 00:08:36.776 real 0m41.813s 00:08:36.776 user 3m13.005s 00:08:36.776 sys 0m15.444s 00:08:36.776 18:27:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:36.776 18:27:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:36.776 ************************************ 00:08:36.776 END TEST nvmf_ns_hotplug_stress 00:08:36.776 ************************************ 00:08:36.776 18:27:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:36.776 18:27:59 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:08:36.776 18:27:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:36.776 18:27:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:36.776 18:27:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:36.776 ************************************ 00:08:36.776 START TEST nvmf_connect_stress 00:08:36.776 ************************************ 00:08:36.776 18:27:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:08:37.035 * Looking for test storage... 
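The repetitive @16/@17/@18 records above come from the namespace hotplug loop in target/ns_hotplug_stress.sh, which repeatedly hot-adds and hot-removes namespaces on nqn.2016-06.io.spdk:cnode1 using the null0..null7 bdevs seen in the trace. A minimal sketch of that pattern, not the script's source, with the script's randomized interleaving simplified into an add phase followed by a shuffled remove phase:

    # Illustrative sketch only -- not a copy of ns_hotplug_stress.sh.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    for (( pass = 0; pass < 10; ++pass )); do
        for n in $(seq 1 8); do
            # nsid n is backed by null bdev null(n-1), matching the trace above
            "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"
        done
        for n in $(shuf -e $(seq 1 8)); do
            "$rpc" nvmf_subsystem_remove_ns "$nqn" "$n"   # hot-remove in a shuffled order (shuf is this sketch's choice)
        done
    done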
00:08:37.035 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:37.035 18:27:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:37.035 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:08:37.035 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:37.035 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:37.035 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:37.035 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:37.035 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:37.035 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:37.035 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:37.035 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:37.035 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:37.035 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:37.035 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:08:37.035 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=ee8aff67-4252-4979-91cf-1a72f40d57b6 00:08:37.035 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:37.035 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:37.035 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:37.035 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:37.035 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:37.035 18:27:59 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:37.035 18:27:59 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:37.035 18:27:59 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:37.036 18:27:59 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.036 18:27:59 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.036 18:27:59 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.036 18:27:59 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:08:37.036 18:27:59 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.036 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:08:37.036 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:37.036 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:37.036 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:37.036 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:37.036 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:37.036 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:37.036 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:37.036 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:37.036 18:27:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:08:37.036 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:37.036 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:37.036 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:37.036 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:37.036 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:37.036 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:37.036 18:27:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:08:37.036 18:27:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:37.036 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:37.036 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:37.036 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:37.036 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:37.036 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:37.036 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:37.036 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:37.036 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:37.036 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:37.036 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:37.036 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:37.036 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:37.036 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:37.036 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:37.036 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:37.036 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:37.036 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:37.036 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:37.036 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:37.036 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:37.036 Cannot find device "nvmf_tgt_br" 00:08:37.036 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@155 -- # true 00:08:37.036 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:37.036 Cannot find device "nvmf_tgt_br2" 00:08:37.036 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@156 -- # true 00:08:37.036 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:37.036 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:37.036 Cannot find device "nvmf_tgt_br" 00:08:37.036 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@158 -- # true 00:08:37.036 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:37.036 Cannot find device "nvmf_tgt_br2" 00:08:37.036 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@159 -- # true 00:08:37.036 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:37.036 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:37.036 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link delete nvmf_tgt_if 00:08:37.036 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:37.036 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@162 -- # true 00:08:37.036 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:37.036 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:37.036 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@163 -- # true 00:08:37.036 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:37.036 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:37.036 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:37.296 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:37.296 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:37.296 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:37.296 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:37.296 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:37.296 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:37.296 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:37.296 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:37.296 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:37.296 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:37.296 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:37.296 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:37.296 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:37.296 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:37.296 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:37.296 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:37.296 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:37.296 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:37.296 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:37.296 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:37.296 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:37.296 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:37.296 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.116 ms 00:08:37.296 00:08:37.296 --- 10.0.0.2 ping statistics --- 00:08:37.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:37.296 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:08:37.296 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:37.296 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:37.296 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.088 ms 00:08:37.296 00:08:37.296 --- 10.0.0.3 ping statistics --- 00:08:37.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:37.296 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:08:37.296 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:37.296 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:37.296 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:08:37.296 00:08:37.296 --- 10.0.0.1 ping statistics --- 00:08:37.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:37.296 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:08:37.296 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:37.296 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@433 -- # return 0 00:08:37.296 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:37.296 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:37.296 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:37.296 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:37.296 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:37.296 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:37.296 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:37.296 18:27:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:08:37.296 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:37.296 18:27:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:37.296 18:27:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:37.554 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=70891 00:08:37.554 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 70891 00:08:37.554 18:27:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 70891 ']' 00:08:37.554 18:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:37.554 18:27:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:37.554 18:27:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:37.554 18:27:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:37.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
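The ip/iptables trace above is the nvmf_veth_init step of nvmf/common.sh: it builds a private test network with the target in its own network namespace, joined to the host through veth pairs and a bridge, and then verifies reachability with the pings shown above. A condensed sketch using the same names and addresses as the trace (the second target interface nvmf_tgt_if2/10.0.0.3 and the cleanup paths are left out):

    # Condensed illustration of the test topology set up above (not the common.sh code itself).
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator <-> bridge
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target    <-> bridge
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator side
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # target side

    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2    # host -> target namespace, as verified in the trace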
00:08:37.554 18:27:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:37.554 18:27:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:37.554 [2024-07-15 18:27:59.962401] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:08:37.554 [2024-07-15 18:27:59.962477] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:37.554 [2024-07-15 18:28:00.093148] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:37.812 [2024-07-15 18:28:00.190919] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:37.812 [2024-07-15 18:28:00.190972] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:37.812 [2024-07-15 18:28:00.190981] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:37.813 [2024-07-15 18:28:00.190989] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:37.813 [2024-07-15 18:28:00.190996] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:37.813 [2024-07-15 18:28:00.191984] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:37.813 [2024-07-15 18:28:00.192078] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:37.813 [2024-07-15 18:28:00.192079] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:38.379 18:28:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:38.379 18:28:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:08:38.379 18:28:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:38.379 18:28:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:38.379 18:28:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:38.379 18:28:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:38.379 18:28:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:38.379 18:28:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.379 18:28:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:38.380 [2024-07-15 18:28:00.928374] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:38.380 18:28:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.380 18:28:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:38.380 18:28:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.380 18:28:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:38.380 18:28:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.380 18:28:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4420 00:08:38.380 18:28:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.380 18:28:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:38.380 [2024-07-15 18:28:00.953833] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:38.380 18:28:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.380 18:28:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:38.380 18:28:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.380 18:28:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:38.380 NULL1 00:08:38.380 18:28:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.380 18:28:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=70944 00:08:38.380 18:28:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:08:38.380 18:28:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:08:38.380 18:28:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:08:38.380 18:28:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:08:38.380 18:28:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:38.380 18:28:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:38.637 18:28:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:38.637 18:28:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:38.637 18:28:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:38.637 18:28:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:38.637 18:28:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:38.637 18:28:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:38.637 18:28:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:38.637 18:28:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:38.637 18:28:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:38.637 18:28:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:38.637 18:28:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:38.637 18:28:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:38.637 18:28:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:38.637 18:28:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:38.637 18:28:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:38.637 18:28:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:38.637 18:28:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 
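The @15-@21 records above show connect_stress.sh configuring the freshly started target over RPC and launching the stress client. A condensed, illustrative version of those steps: the flags are copied from the trace, the script itself uses the rpc_cmd helper (plain rpc.py is substituted here to keep the sketch self-contained), and backgrounding the client to capture PERF_PID is assumed rather than visible verbatim in the log.

    # Illustrative condensation of the connect_stress.sh setup traced above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    "$rpc" nvmf_create_transport -t tcp -o -u 8192                        # TCP transport, options as passed in the trace
    "$rpc" nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -m 10    # allow any host, set serial, cap at 10 namespaces
    "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420  # listen on the target-namespace address
    "$rpc" bdev_null_create NULL1 1000 512                                # 1000 MiB null bdev, 512-byte blocks

    # Connection stress client: core mask 0x1, 10-second run against the listener above.
    /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' &
    PERF_PID=$!

The seq 1 20 / cat records that follow build up the rpc.txt batch file, and, as far as the trace shows, the repeated kill -0 70944 / rpc_cmd checks further down keep exercising the RPC server for as long as that client process stays alive.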
00:08:38.637 18:28:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:38.637 18:28:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:38.637 18:28:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:38.637 18:28:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:38.637 18:28:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:38.637 18:28:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:38.637 18:28:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:38.637 18:28:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:38.638 18:28:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:38.638 18:28:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:38.638 18:28:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:38.638 18:28:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:38.638 18:28:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:38.638 18:28:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:38.638 18:28:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:38.638 18:28:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:38.638 18:28:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:38.638 18:28:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:38.638 18:28:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:38.638 18:28:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:38.638 18:28:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:38.638 18:28:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70944 00:08:38.638 18:28:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:38.638 18:28:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.638 18:28:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:38.926 18:28:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.926 18:28:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70944 00:08:38.926 18:28:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:38.926 18:28:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.926 18:28:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:39.184 18:28:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.184 18:28:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70944 00:08:39.184 18:28:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:39.184 18:28:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.184 18:28:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:39.750 18:28:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:08:39.750 18:28:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70944 00:08:39.750 18:28:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:39.750 18:28:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.750 18:28:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:40.007 18:28:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.007 18:28:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70944 00:08:40.007 18:28:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:40.007 18:28:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.007 18:28:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:40.265 18:28:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.265 18:28:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70944 00:08:40.265 18:28:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:40.265 18:28:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.265 18:28:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:40.522 18:28:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.522 18:28:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70944 00:08:40.522 18:28:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:40.522 18:28:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.522 18:28:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:40.779 18:28:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.779 18:28:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70944 00:08:40.779 18:28:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:40.779 18:28:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.779 18:28:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:41.344 18:28:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.344 18:28:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70944 00:08:41.344 18:28:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:41.344 18:28:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.344 18:28:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:41.601 18:28:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.601 18:28:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70944 00:08:41.601 18:28:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:41.601 18:28:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.602 18:28:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:41.875 18:28:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.875 18:28:04 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 70944 00:08:41.875 18:28:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:41.875 18:28:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.875 18:28:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:42.133 18:28:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.133 18:28:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70944 00:08:42.133 18:28:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:42.133 18:28:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.133 18:28:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:42.405 18:28:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.405 18:28:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70944 00:08:42.405 18:28:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:42.405 18:28:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.405 18:28:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:42.983 18:28:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.983 18:28:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70944 00:08:42.983 18:28:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:42.983 18:28:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.983 18:28:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:43.240 18:28:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.240 18:28:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70944 00:08:43.240 18:28:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:43.240 18:28:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.240 18:28:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:43.497 18:28:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.497 18:28:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70944 00:08:43.497 18:28:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:43.497 18:28:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.498 18:28:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:43.756 18:28:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.756 18:28:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70944 00:08:43.756 18:28:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:43.756 18:28:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.756 18:28:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:44.013 18:28:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.013 18:28:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70944 00:08:44.013 18:28:06 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:44.013 18:28:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.013 18:28:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:44.581 18:28:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.581 18:28:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70944 00:08:44.581 18:28:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:44.581 18:28:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.581 18:28:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:44.839 18:28:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.839 18:28:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70944 00:08:44.839 18:28:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:44.839 18:28:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.839 18:28:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:45.096 18:28:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.096 18:28:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70944 00:08:45.096 18:28:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:45.096 18:28:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.096 18:28:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:45.355 18:28:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.355 18:28:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70944 00:08:45.355 18:28:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:45.355 18:28:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.355 18:28:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:45.938 18:28:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.938 18:28:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70944 00:08:45.938 18:28:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:45.938 18:28:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.938 18:28:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:46.195 18:28:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.195 18:28:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70944 00:08:46.195 18:28:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:46.195 18:28:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.195 18:28:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:46.454 18:28:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.454 18:28:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70944 00:08:46.454 18:28:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 
00:08:46.454 18:28:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.454 18:28:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:46.711 18:28:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.711 18:28:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70944 00:08:46.711 18:28:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:46.711 18:28:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.711 18:28:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:46.969 18:28:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.969 18:28:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70944 00:08:46.969 18:28:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:46.969 18:28:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.969 18:28:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:47.536 18:28:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.536 18:28:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70944 00:08:47.536 18:28:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:47.536 18:28:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.536 18:28:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:47.794 18:28:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.794 18:28:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70944 00:08:47.794 18:28:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:47.794 18:28:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.794 18:28:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:48.053 18:28:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.053 18:28:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70944 00:08:48.053 18:28:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:48.053 18:28:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.053 18:28:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:48.318 18:28:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.318 18:28:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70944 00:08:48.318 18:28:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:48.318 18:28:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.318 18:28:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:48.586 18:28:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.586 18:28:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70944 00:08:48.586 18:28:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:48.586 18:28:11 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.586 18:28:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:48.586 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:48.934 18:28:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.934 18:28:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70944 00:08:48.934 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (70944) - No such process 00:08:48.934 18:28:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 70944 00:08:48.934 18:28:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:08:48.934 18:28:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:48.934 18:28:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:08:48.934 18:28:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:48.934 18:28:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:08:48.934 18:28:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:48.934 18:28:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:08:48.934 18:28:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:48.934 18:28:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:49.190 rmmod nvme_tcp 00:08:49.191 rmmod nvme_fabrics 00:08:49.191 rmmod nvme_keyring 00:08:49.191 18:28:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:49.191 18:28:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:08:49.191 18:28:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:08:49.191 18:28:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 70891 ']' 00:08:49.191 18:28:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 70891 00:08:49.191 18:28:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 70891 ']' 00:08:49.191 18:28:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 70891 00:08:49.191 18:28:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:08:49.191 18:28:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:49.191 18:28:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 70891 00:08:49.191 18:28:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:49.191 18:28:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:49.191 killing process with pid 70891 00:08:49.191 18:28:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 70891' 00:08:49.191 18:28:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 70891 00:08:49.191 18:28:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 70891 00:08:49.448 18:28:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:49.448 18:28:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:49.448 18:28:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 
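The xtrace above is connect_stress.sh polling the stress tool it spawned: lines 27-28 appear to assemble a batch of RPC requests (what the cat reads is not shown), line 34 re-checks the tool with kill -0 and line 35 replays RPCs against the target until the tool (pid 70944 in this run) exits, after which line 38 reaps it and line 39 removes rpc.txt. A minimal sketch of that pattern follows; the batch-file contents and the stdin redirection are assumptions, since neither is visible in this trace:

    # Sketch only -- the rpc.txt payload and the way rpc_cmd consumes it are assumed.
    rpc_txt=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt
    for i in $(seq 1 20); do
        echo "nvmf_get_subsystems" >> "$rpc_txt"   # assumed batch entry
    done
    while kill -0 "$stress_pid" 2>/dev/null; do    # stress tool still alive?
        rpc_cmd < "$rpc_txt"                       # keep the target busy with RPC traffic
    done
    wait "$stress_pid"                             # reap it once kill -0 reports it gone
    rm -f "$rpc_txt"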
00:08:49.448 18:28:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:49.448 18:28:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:49.448 18:28:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:49.448 18:28:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:49.448 18:28:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:49.448 18:28:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:49.448 00:08:49.448 real 0m12.601s 00:08:49.448 user 0m40.623s 00:08:49.448 sys 0m4.523s 00:08:49.448 18:28:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:49.448 18:28:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:49.448 ************************************ 00:08:49.448 END TEST nvmf_connect_stress 00:08:49.448 ************************************ 00:08:49.448 18:28:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:49.448 18:28:11 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:08:49.448 18:28:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:49.448 18:28:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:49.448 18:28:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:49.448 ************************************ 00:08:49.448 START TEST nvmf_fused_ordering 00:08:49.448 ************************************ 00:08:49.448 18:28:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:08:49.448 * Looking for test storage... 
00:08:49.448 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:49.448 18:28:12 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:49.448 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:08:49.706 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:49.706 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:49.706 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:49.706 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:49.706 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:49.706 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:49.706 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:49.706 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:49.706 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:49.706 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:49.706 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:08:49.706 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=ee8aff67-4252-4979-91cf-1a72f40d57b6 00:08:49.706 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:49.706 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:49.706 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:49.706 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:49.706 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:49.706 18:28:12 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:49.706 18:28:12 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:49.706 18:28:12 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:49.706 18:28:12 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.706 18:28:12 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.706 18:28:12 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.706 18:28:12 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:08:49.706 18:28:12 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.706 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:08:49.706 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:49.706 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:49.706 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:49.706 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:49.706 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:49.706 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:49.706 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:49.706 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:49.706 18:28:12 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:08:49.706 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:49.706 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:49.706 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:49.707 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:49.707 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:49.707 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:49.707 18:28:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:08:49.707 18:28:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:49.707 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:49.707 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:49.707 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:49.707 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:49.707 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:49.707 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:49.707 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:49.707 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:49.707 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:49.707 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:49.707 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:49.707 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:49.707 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:49.707 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:49.707 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:49.707 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:49.707 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:49.707 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:49.707 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:49.707 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:49.707 Cannot find device "nvmf_tgt_br" 00:08:49.707 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@155 -- # true 00:08:49.707 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:49.707 Cannot find device "nvmf_tgt_br2" 00:08:49.707 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@156 -- # true 00:08:49.707 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:49.707 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:49.707 Cannot find device "nvmf_tgt_br" 00:08:49.707 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@158 -- # true 00:08:49.707 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:49.707 Cannot find device "nvmf_tgt_br2" 00:08:49.707 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@159 -- # true 00:08:49.707 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:49.707 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:49.707 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link delete nvmf_tgt_if 00:08:49.707 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:49.707 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@162 -- # true 00:08:49.707 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:49.707 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:49.707 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@163 -- # true 00:08:49.707 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:49.707 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:49.707 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:49.707 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:49.965 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:49.965 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:49.965 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:49.965 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:49.965 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:49.965 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:49.965 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:49.965 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:49.965 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:49.965 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:49.965 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:49.965 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:49.965 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:49.965 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:49.965 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:49.965 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:49.965 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:49.965 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:49.965 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:49.965 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:49.965 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:49.965 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.110 ms 00:08:49.965 00:08:49.965 --- 10.0.0.2 ping statistics --- 00:08:49.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.965 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:08:49.965 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:49.965 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:49.965 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:08:49.965 00:08:49.965 --- 10.0.0.3 ping statistics --- 00:08:49.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.965 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:08:49.965 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:49.965 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:49.965 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:08:49.965 00:08:49.965 --- 10.0.0.1 ping statistics --- 00:08:49.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.966 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:08:49.966 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:49.966 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@433 -- # return 0 00:08:49.966 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:49.966 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:49.966 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:49.966 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:49.966 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:49.966 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:49.966 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:49.966 18:28:12 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:08:49.966 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:49.966 18:28:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:49.966 18:28:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:49.966 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=71268 00:08:49.966 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 71268 00:08:49.966 18:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:49.966 18:28:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 71268 ']' 00:08:49.966 18:28:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:49.966 18:28:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:49.966 18:28:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:49.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
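Before the target app is started, nvmf_veth_init (nvmf/common.sh@166-207 above) builds the test topology: the initiator keeps 10.0.0.1 on nvmf_init_if, the target side gets 10.0.0.2 and 10.0.0.3 inside the nvmf_tgt_ns_spdk namespace, everything is joined through the nvmf_br bridge, TCP port 4420 is opened, and the pings above confirm reachability. A condensed reproduction of those steps (not the helper itself):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # listener address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target address
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2    # initiator -> target listener sanity check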
00:08:49.966 18:28:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:49.966 18:28:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:50.224 [2024-07-15 18:28:12.582221] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:08:50.224 [2024-07-15 18:28:12.582307] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:50.224 [2024-07-15 18:28:12.724749] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.224 [2024-07-15 18:28:12.822137] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:50.224 [2024-07-15 18:28:12.822186] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:50.224 [2024-07-15 18:28:12.822195] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:50.224 [2024-07-15 18:28:12.822203] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:50.224 [2024-07-15 18:28:12.822210] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:50.224 [2024-07-15 18:28:12.822239] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:51.159 18:28:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:51.159 18:28:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:08:51.159 18:28:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:51.159 18:28:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:51.159 18:28:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:51.159 18:28:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:51.159 18:28:13 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:51.159 18:28:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:51.159 18:28:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:51.159 [2024-07-15 18:28:13.538743] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:51.159 18:28:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:51.159 18:28:13 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:51.159 18:28:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:51.159 18:28:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:51.159 18:28:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:51.159 18:28:13 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:51.159 18:28:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:51.159 18:28:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 
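With the topology in place, the harness launches nvmf_tgt inside the namespace (nvmf/common.sh@480 above) and blocks in waitforlisten until the JSON-RPC socket answers; the "Reactor started on core 1" notice above marks the point where it is ready. The launch command below is taken from the trace, while the polling loop is only an illustrative stand-in for waitforlisten:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!                               # 71268 in this run
    until [ -S /var/tmp/spdk.sock ]; do      # stand-in for waitforlisten
        sleep 0.1
    done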
00:08:51.159 [2024-07-15 18:28:13.562812] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:51.159 18:28:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:51.159 18:28:13 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:51.159 18:28:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:51.159 18:28:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:51.159 NULL1 00:08:51.159 18:28:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:51.159 18:28:13 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:08:51.159 18:28:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:51.159 18:28:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:51.159 18:28:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:51.159 18:28:13 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:08:51.159 18:28:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:51.159 18:28:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:51.159 18:28:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:51.159 18:28:13 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:08:51.159 [2024-07-15 18:28:13.633133] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 
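fused_ordering.sh@15-22 above then provisions the subsystem through rpc_cmd and launches the fused_ordering tool against it; the fused_ordering(N) lines that follow are that tool's progress output. A standalone equivalent of the same calls, assuming rpc_cmd ultimately forwards to scripts/rpc.py over the default /var/tmp/spdk.sock (as the waitforlisten message above suggests):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" nvmf_create_transport -t tcp -o -u 8192
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    "$rpc" bdev_null_create NULL1 1000 512     # 1000 MB null bdev, 512-byte blocks ("size: 1GB" above)
    "$rpc" bdev_wait_for_examine
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
    /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'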
00:08:51.159 [2024-07-15 18:28:13.633188] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71317 ] 00:08:51.726 Attached to nqn.2016-06.io.spdk:cnode1 00:08:51.726 Namespace ID: 1 size: 1GB 00:08:51.726 fused_ordering(0) 00:08:51.726 fused_ordering(1) 00:08:51.726 fused_ordering(2) 00:08:51.726 fused_ordering(3) 00:08:51.726 fused_ordering(4) 00:08:51.726 fused_ordering(5) 00:08:51.726 fused_ordering(6) 00:08:51.726 fused_ordering(7) 00:08:51.726 fused_ordering(8) 00:08:51.726 fused_ordering(9) 00:08:51.726 fused_ordering(10) 00:08:51.726 fused_ordering(11) 00:08:51.726 fused_ordering(12) 00:08:51.726 fused_ordering(13) 00:08:51.726 fused_ordering(14) 00:08:51.726 fused_ordering(15) 00:08:51.726 fused_ordering(16) 00:08:51.726 fused_ordering(17) 00:08:51.726 fused_ordering(18) 00:08:51.726 fused_ordering(19) 00:08:51.726 fused_ordering(20) 00:08:51.726 fused_ordering(21) 00:08:51.726 fused_ordering(22) 00:08:51.726 fused_ordering(23) 00:08:51.726 fused_ordering(24) 00:08:51.726 fused_ordering(25) 00:08:51.726 fused_ordering(26) 00:08:51.726 fused_ordering(27) 00:08:51.726 fused_ordering(28) 00:08:51.726 fused_ordering(29) 00:08:51.726 fused_ordering(30) 00:08:51.726 fused_ordering(31) 00:08:51.726 fused_ordering(32) 00:08:51.726 fused_ordering(33) 00:08:51.727 fused_ordering(34) 00:08:51.727 fused_ordering(35) 00:08:51.727 fused_ordering(36) 00:08:51.727 fused_ordering(37) 00:08:51.727 fused_ordering(38) 00:08:51.727 fused_ordering(39) 00:08:51.727 fused_ordering(40) 00:08:51.727 fused_ordering(41) 00:08:51.727 fused_ordering(42) 00:08:51.727 fused_ordering(43) 00:08:51.727 fused_ordering(44) 00:08:51.727 fused_ordering(45) 00:08:51.727 fused_ordering(46) 00:08:51.727 fused_ordering(47) 00:08:51.727 fused_ordering(48) 00:08:51.727 fused_ordering(49) 00:08:51.727 fused_ordering(50) 00:08:51.727 fused_ordering(51) 00:08:51.727 fused_ordering(52) 00:08:51.727 fused_ordering(53) 00:08:51.727 fused_ordering(54) 00:08:51.727 fused_ordering(55) 00:08:51.727 fused_ordering(56) 00:08:51.727 fused_ordering(57) 00:08:51.727 fused_ordering(58) 00:08:51.727 fused_ordering(59) 00:08:51.727 fused_ordering(60) 00:08:51.727 fused_ordering(61) 00:08:51.727 fused_ordering(62) 00:08:51.727 fused_ordering(63) 00:08:51.727 fused_ordering(64) 00:08:51.727 fused_ordering(65) 00:08:51.727 fused_ordering(66) 00:08:51.727 fused_ordering(67) 00:08:51.727 fused_ordering(68) 00:08:51.727 fused_ordering(69) 00:08:51.727 fused_ordering(70) 00:08:51.727 fused_ordering(71) 00:08:51.727 fused_ordering(72) 00:08:51.727 fused_ordering(73) 00:08:51.727 fused_ordering(74) 00:08:51.727 fused_ordering(75) 00:08:51.727 fused_ordering(76) 00:08:51.727 fused_ordering(77) 00:08:51.727 fused_ordering(78) 00:08:51.727 fused_ordering(79) 00:08:51.727 fused_ordering(80) 00:08:51.727 fused_ordering(81) 00:08:51.727 fused_ordering(82) 00:08:51.727 fused_ordering(83) 00:08:51.727 fused_ordering(84) 00:08:51.727 fused_ordering(85) 00:08:51.727 fused_ordering(86) 00:08:51.727 fused_ordering(87) 00:08:51.727 fused_ordering(88) 00:08:51.727 fused_ordering(89) 00:08:51.727 fused_ordering(90) 00:08:51.727 fused_ordering(91) 00:08:51.727 fused_ordering(92) 00:08:51.727 fused_ordering(93) 00:08:51.727 fused_ordering(94) 00:08:51.727 fused_ordering(95) 00:08:51.727 fused_ordering(96) 00:08:51.727 fused_ordering(97) 00:08:51.727 
fused_ordering(98) 00:08:51.727 fused_ordering(99) 00:08:51.727 fused_ordering(100) 00:08:51.727 fused_ordering(101) 00:08:51.727 fused_ordering(102) 00:08:51.727 fused_ordering(103) 00:08:51.727 fused_ordering(104) 00:08:51.727 fused_ordering(105) 00:08:51.727 fused_ordering(106) 00:08:51.727 fused_ordering(107) 00:08:51.727 fused_ordering(108) 00:08:51.727 fused_ordering(109) 00:08:51.727 fused_ordering(110) 00:08:51.727 fused_ordering(111) 00:08:51.727 fused_ordering(112) 00:08:51.727 fused_ordering(113) 00:08:51.727 fused_ordering(114) 00:08:51.727 fused_ordering(115) 00:08:51.727 fused_ordering(116) 00:08:51.727 fused_ordering(117) 00:08:51.727 fused_ordering(118) 00:08:51.727 fused_ordering(119) 00:08:51.727 fused_ordering(120) 00:08:51.727 fused_ordering(121) 00:08:51.727 fused_ordering(122) 00:08:51.727 fused_ordering(123) 00:08:51.727 fused_ordering(124) 00:08:51.727 fused_ordering(125) 00:08:51.727 fused_ordering(126) 00:08:51.727 fused_ordering(127) 00:08:51.727 fused_ordering(128) 00:08:51.727 fused_ordering(129) 00:08:51.727 fused_ordering(130) 00:08:51.727 fused_ordering(131) 00:08:51.727 fused_ordering(132) 00:08:51.727 fused_ordering(133) 00:08:51.727 fused_ordering(134) 00:08:51.727 fused_ordering(135) 00:08:51.727 fused_ordering(136) 00:08:51.727 fused_ordering(137) 00:08:51.727 fused_ordering(138) 00:08:51.727 fused_ordering(139) 00:08:51.727 fused_ordering(140) 00:08:51.727 fused_ordering(141) 00:08:51.727 fused_ordering(142) 00:08:51.727 fused_ordering(143) 00:08:51.727 fused_ordering(144) 00:08:51.727 fused_ordering(145) 00:08:51.727 fused_ordering(146) 00:08:51.727 fused_ordering(147) 00:08:51.727 fused_ordering(148) 00:08:51.727 fused_ordering(149) 00:08:51.727 fused_ordering(150) 00:08:51.727 fused_ordering(151) 00:08:51.727 fused_ordering(152) 00:08:51.727 fused_ordering(153) 00:08:51.727 fused_ordering(154) 00:08:51.727 fused_ordering(155) 00:08:51.727 fused_ordering(156) 00:08:51.727 fused_ordering(157) 00:08:51.727 fused_ordering(158) 00:08:51.727 fused_ordering(159) 00:08:51.727 fused_ordering(160) 00:08:51.727 fused_ordering(161) 00:08:51.727 fused_ordering(162) 00:08:51.727 fused_ordering(163) 00:08:51.727 fused_ordering(164) 00:08:51.727 fused_ordering(165) 00:08:51.727 fused_ordering(166) 00:08:51.727 fused_ordering(167) 00:08:51.727 fused_ordering(168) 00:08:51.727 fused_ordering(169) 00:08:51.727 fused_ordering(170) 00:08:51.727 fused_ordering(171) 00:08:51.727 fused_ordering(172) 00:08:51.727 fused_ordering(173) 00:08:51.727 fused_ordering(174) 00:08:51.727 fused_ordering(175) 00:08:51.727 fused_ordering(176) 00:08:51.727 fused_ordering(177) 00:08:51.727 fused_ordering(178) 00:08:51.727 fused_ordering(179) 00:08:51.727 fused_ordering(180) 00:08:51.727 fused_ordering(181) 00:08:51.727 fused_ordering(182) 00:08:51.727 fused_ordering(183) 00:08:51.727 fused_ordering(184) 00:08:51.727 fused_ordering(185) 00:08:51.727 fused_ordering(186) 00:08:51.727 fused_ordering(187) 00:08:51.727 fused_ordering(188) 00:08:51.727 fused_ordering(189) 00:08:51.727 fused_ordering(190) 00:08:51.727 fused_ordering(191) 00:08:51.727 fused_ordering(192) 00:08:51.727 fused_ordering(193) 00:08:51.727 fused_ordering(194) 00:08:51.727 fused_ordering(195) 00:08:51.727 fused_ordering(196) 00:08:51.727 fused_ordering(197) 00:08:51.727 fused_ordering(198) 00:08:51.727 fused_ordering(199) 00:08:51.727 fused_ordering(200) 00:08:51.727 fused_ordering(201) 00:08:51.727 fused_ordering(202) 00:08:51.727 fused_ordering(203) 00:08:51.727 fused_ordering(204) 00:08:51.727 fused_ordering(205) 
00:08:51.727 fused_ordering(206) 00:08:51.727 fused_ordering(207) 00:08:51.727 fused_ordering(208) 00:08:51.727 fused_ordering(209) 00:08:51.727 fused_ordering(210) 00:08:51.727 fused_ordering(211) 00:08:51.727 fused_ordering(212) 00:08:51.727 fused_ordering(213) 00:08:51.727 fused_ordering(214) 00:08:51.727 fused_ordering(215) 00:08:51.727 fused_ordering(216) 00:08:51.727 fused_ordering(217) 00:08:51.727 fused_ordering(218) 00:08:51.727 fused_ordering(219) 00:08:51.727 fused_ordering(220) 00:08:51.727 fused_ordering(221) 00:08:51.727 fused_ordering(222) 00:08:51.727 fused_ordering(223) 00:08:51.727 fused_ordering(224) 00:08:51.727 fused_ordering(225) 00:08:51.727 fused_ordering(226) 00:08:51.727 fused_ordering(227) 00:08:51.727 fused_ordering(228) 00:08:51.727 fused_ordering(229) 00:08:51.727 fused_ordering(230) 00:08:51.727 fused_ordering(231) 00:08:51.727 fused_ordering(232) 00:08:51.727 fused_ordering(233) 00:08:51.727 fused_ordering(234) 00:08:51.727 fused_ordering(235) 00:08:51.727 fused_ordering(236) 00:08:51.727 fused_ordering(237) 00:08:51.727 fused_ordering(238) 00:08:51.727 fused_ordering(239) 00:08:51.727 fused_ordering(240) 00:08:51.727 fused_ordering(241) 00:08:51.727 fused_ordering(242) 00:08:51.727 fused_ordering(243) 00:08:51.727 fused_ordering(244) 00:08:51.727 fused_ordering(245) 00:08:51.727 fused_ordering(246) 00:08:51.727 fused_ordering(247) 00:08:51.727 fused_ordering(248) 00:08:51.727 fused_ordering(249) 00:08:51.727 fused_ordering(250) 00:08:51.727 fused_ordering(251) 00:08:51.727 fused_ordering(252) 00:08:51.727 fused_ordering(253) 00:08:51.727 fused_ordering(254) 00:08:51.727 fused_ordering(255) 00:08:51.727 fused_ordering(256) 00:08:51.727 fused_ordering(257) 00:08:51.727 fused_ordering(258) 00:08:51.727 fused_ordering(259) 00:08:51.727 fused_ordering(260) 00:08:51.727 fused_ordering(261) 00:08:51.727 fused_ordering(262) 00:08:51.727 fused_ordering(263) 00:08:51.727 fused_ordering(264) 00:08:51.727 fused_ordering(265) 00:08:51.727 fused_ordering(266) 00:08:51.727 fused_ordering(267) 00:08:51.727 fused_ordering(268) 00:08:51.727 fused_ordering(269) 00:08:51.727 fused_ordering(270) 00:08:51.727 fused_ordering(271) 00:08:51.727 fused_ordering(272) 00:08:51.727 fused_ordering(273) 00:08:51.727 fused_ordering(274) 00:08:51.727 fused_ordering(275) 00:08:51.727 fused_ordering(276) 00:08:51.727 fused_ordering(277) 00:08:51.727 fused_ordering(278) 00:08:51.727 fused_ordering(279) 00:08:51.727 fused_ordering(280) 00:08:51.727 fused_ordering(281) 00:08:51.727 fused_ordering(282) 00:08:51.727 fused_ordering(283) 00:08:51.727 fused_ordering(284) 00:08:51.727 fused_ordering(285) 00:08:51.727 fused_ordering(286) 00:08:51.727 fused_ordering(287) 00:08:51.727 fused_ordering(288) 00:08:51.727 fused_ordering(289) 00:08:51.727 fused_ordering(290) 00:08:51.727 fused_ordering(291) 00:08:51.727 fused_ordering(292) 00:08:51.727 fused_ordering(293) 00:08:51.727 fused_ordering(294) 00:08:51.727 fused_ordering(295) 00:08:51.727 fused_ordering(296) 00:08:51.727 fused_ordering(297) 00:08:51.727 fused_ordering(298) 00:08:51.727 fused_ordering(299) 00:08:51.727 fused_ordering(300) 00:08:51.727 fused_ordering(301) 00:08:51.727 fused_ordering(302) 00:08:51.727 fused_ordering(303) 00:08:51.727 fused_ordering(304) 00:08:51.727 fused_ordering(305) 00:08:51.727 fused_ordering(306) 00:08:51.727 fused_ordering(307) 00:08:51.727 fused_ordering(308) 00:08:51.727 fused_ordering(309) 00:08:51.727 fused_ordering(310) 00:08:51.727 fused_ordering(311) 00:08:51.727 fused_ordering(312) 00:08:51.727 
fused_ordering(313) ... fused_ordering(957) [per-request fused_ordering trace, one entry per request 313 through 957, timestamps 00:08:51.727 to 00:08:53.124] 00:08:53.124
fused_ordering(958) 00:08:53.124 fused_ordering(959) 00:08:53.124 fused_ordering(960) 00:08:53.124 fused_ordering(961) 00:08:53.124 fused_ordering(962) 00:08:53.125 fused_ordering(963) 00:08:53.125 fused_ordering(964) 00:08:53.125 fused_ordering(965) 00:08:53.125 fused_ordering(966) 00:08:53.125 fused_ordering(967) 00:08:53.125 fused_ordering(968) 00:08:53.125 fused_ordering(969) 00:08:53.125 fused_ordering(970) 00:08:53.125 fused_ordering(971) 00:08:53.125 fused_ordering(972) 00:08:53.125 fused_ordering(973) 00:08:53.125 fused_ordering(974) 00:08:53.125 fused_ordering(975) 00:08:53.125 fused_ordering(976) 00:08:53.125 fused_ordering(977) 00:08:53.125 fused_ordering(978) 00:08:53.125 fused_ordering(979) 00:08:53.125 fused_ordering(980) 00:08:53.125 fused_ordering(981) 00:08:53.125 fused_ordering(982) 00:08:53.125 fused_ordering(983) 00:08:53.125 fused_ordering(984) 00:08:53.125 fused_ordering(985) 00:08:53.125 fused_ordering(986) 00:08:53.125 fused_ordering(987) 00:08:53.125 fused_ordering(988) 00:08:53.125 fused_ordering(989) 00:08:53.125 fused_ordering(990) 00:08:53.125 fused_ordering(991) 00:08:53.125 fused_ordering(992) 00:08:53.125 fused_ordering(993) 00:08:53.125 fused_ordering(994) 00:08:53.125 fused_ordering(995) 00:08:53.125 fused_ordering(996) 00:08:53.125 fused_ordering(997) 00:08:53.125 fused_ordering(998) 00:08:53.125 fused_ordering(999) 00:08:53.125 fused_ordering(1000) 00:08:53.125 fused_ordering(1001) 00:08:53.125 fused_ordering(1002) 00:08:53.125 fused_ordering(1003) 00:08:53.125 fused_ordering(1004) 00:08:53.125 fused_ordering(1005) 00:08:53.125 fused_ordering(1006) 00:08:53.125 fused_ordering(1007) 00:08:53.125 fused_ordering(1008) 00:08:53.125 fused_ordering(1009) 00:08:53.125 fused_ordering(1010) 00:08:53.125 fused_ordering(1011) 00:08:53.125 fused_ordering(1012) 00:08:53.125 fused_ordering(1013) 00:08:53.125 fused_ordering(1014) 00:08:53.125 fused_ordering(1015) 00:08:53.125 fused_ordering(1016) 00:08:53.125 fused_ordering(1017) 00:08:53.125 fused_ordering(1018) 00:08:53.125 fused_ordering(1019) 00:08:53.125 fused_ordering(1020) 00:08:53.125 fused_ordering(1021) 00:08:53.125 fused_ordering(1022) 00:08:53.125 fused_ordering(1023) 00:08:53.125 18:28:15 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:08:53.125 18:28:15 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:08:53.125 18:28:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:53.125 18:28:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:08:53.125 18:28:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:53.125 18:28:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:08:53.125 18:28:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:53.125 18:28:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:53.125 rmmod nvme_tcp 00:08:53.125 rmmod nvme_fabrics 00:08:53.125 rmmod nvme_keyring 00:08:53.125 18:28:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:53.125 18:28:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:08:53.125 18:28:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:08:53.125 18:28:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 71268 ']' 00:08:53.125 18:28:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 71268 00:08:53.125 18:28:15 
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 71268 ']' 00:08:53.125 18:28:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 71268 00:08:53.125 18:28:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:08:53.125 18:28:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:53.125 18:28:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71268 00:08:53.125 18:28:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:53.125 18:28:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:53.125 killing process with pid 71268 00:08:53.125 18:28:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71268' 00:08:53.125 18:28:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 71268 00:08:53.125 18:28:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 71268 00:08:53.384 18:28:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:53.384 18:28:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:53.384 18:28:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:53.384 18:28:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:53.384 18:28:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:53.384 18:28:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:53.384 18:28:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:53.384 18:28:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:53.384 18:28:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:53.384 00:08:53.384 real 0m3.956s 00:08:53.384 user 0m4.422s 00:08:53.384 sys 0m1.489s 00:08:53.384 18:28:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:53.384 18:28:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:53.384 ************************************ 00:08:53.384 END TEST nvmf_fused_ordering 00:08:53.384 ************************************ 00:08:53.384 18:28:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:53.384 18:28:15 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:53.384 18:28:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:53.385 18:28:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:53.385 18:28:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:53.385 ************************************ 00:08:53.385 START TEST nvmf_delete_subsystem 00:08:53.385 ************************************ 00:08:53.385 18:28:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:53.644 * Looking for test storage... 
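For reference, the nvmftestfini teardown traced above boils down to a few operations. This is a minimal sketch reconstructed from the commands visible in the trace; the pid 71268 is specific to this run, and the namespace removal step is an assumption about what the redirected _remove_spdk_ns helper does:

  # unload the kernel NVMe/TCP initiator modules pulled in by the test
  modprobe -v -r nvme-tcp        # verbose removal prints rmmod nvme_tcp, nvme_fabrics, nvme_keyring
  modprobe -v -r nvme-fabrics
  # stop the nvmf_tgt application started for the fused_ordering run
  kill 71268 && wait 71268
  # _remove_spdk_ns presumably deletes the nvmf_tgt_ns_spdk namespace (its
  # output is redirected away in the trace), then the initiator-side veth
  # address is flushed
  ip -4 addr flush nvmf_init_if
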
00:08:53.644 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:53.644 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:53.644 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:08:53.644 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:53.644 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:53.644 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:53.644 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:53.644 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:53.644 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:53.644 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:53.644 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:53.644 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:53.644 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:53.644 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:08:53.644 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=ee8aff67-4252-4979-91cf-1a72f40d57b6 00:08:53.644 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:53.644 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:53.644 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:53.644 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:53.644 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:53.644 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:53.644 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:53.644 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:53.644 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.644 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.644 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.644 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:08:53.644 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.644 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:08:53.644 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:53.644 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:53.644 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:53.644 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:53.644 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:53.644 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:53.644 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:53.644 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:53.644 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:08:53.644 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:53.644 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:53.644 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:53.644 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:53.644 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:53.644 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:53.644 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:53.644 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:53.644 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:53.644 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:53.644 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:53.644 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:53.644 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:53.644 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:53.644 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:53.644 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:53.644 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:53.644 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:53.644 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:53.644 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:53.644 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:53.644 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:53.644 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:53.644 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:53.644 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:53.644 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:53.644 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:53.644 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:53.644 Cannot find device "nvmf_tgt_br" 00:08:53.644 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # true 00:08:53.644 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:53.644 Cannot find device "nvmf_tgt_br2" 00:08:53.644 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # true 00:08:53.644 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:53.644 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:53.644 Cannot find device "nvmf_tgt_br" 00:08:53.644 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # true 00:08:53.644 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:53.644 Cannot find device "nvmf_tgt_br2" 00:08:53.644 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # true 00:08:53.644 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:53.904 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@161 -- # ip link delete 
nvmf_init_if 00:08:53.904 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:53.904 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:53.904 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # true 00:08:53.904 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:53.904 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:53.904 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # true 00:08:53.904 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:53.904 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:53.904 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:53.904 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:53.904 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:53.904 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:53.904 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:53.904 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:53.904 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:53.904 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:53.904 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:53.904 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:53.904 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:53.904 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:53.904 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:53.904 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:53.904 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:53.904 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:53.904 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:53.904 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:53.904 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:53.904 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:53.904 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:53.904 18:28:16 
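Up to this point nvmf_veth_init has built the test topology: a dedicated network namespace (nvmf_tgt_ns_spdk) that will hold the SPDK target on 10.0.0.2 and 10.0.0.3, the initiator side on 10.0.0.1, and a bridge joining the veth peers, with iptables opened for the NVMe/TCP port. Condensed from the trace above into a sketch (the leftover-cleanup steps and their "Cannot find device" noise are omitted):

  ip netns add nvmf_tgt_ns_spdk
  # veth pairs: the *_if ends carry traffic, the *_br ends get enslaved to a bridge
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # addressing: 10.0.0.1 for the initiator, 10.0.0.2/10.0.0.3 for the target listeners
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  # bring the links up on both sides of the namespace boundary
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # one bridge ties the root-namespace ends together
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  # let NVMe/TCP (port 4420) in and allow forwarding across the bridge
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
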
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:54.162 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:54.162 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.128 ms 00:08:54.162 00:08:54.162 --- 10.0.0.2 ping statistics --- 00:08:54.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.162 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:08:54.162 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:54.162 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:54.162 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.089 ms 00:08:54.162 00:08:54.162 --- 10.0.0.3 ping statistics --- 00:08:54.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.162 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:08:54.162 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:54.162 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:54.162 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.069 ms 00:08:54.162 00:08:54.162 --- 10.0.0.1 ping statistics --- 00:08:54.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.162 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:08:54.162 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:54.162 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@433 -- # return 0 00:08:54.162 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:54.162 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:54.162 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:54.162 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:54.162 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:54.162 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:54.162 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:54.162 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:54.162 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:54.162 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:54.163 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:54.163 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:54.163 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=71527 00:08:54.163 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 71527 00:08:54.163 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 71527 ']' 00:08:54.163 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:54.163 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:54.163 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
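Connectivity is then checked with a single ping per address before the target application is started inside the namespace. Reduced to the underlying commands (a sketch; the backgrounding with & and $! stands in for however nvmfappstart actually captures the pid, which was 71527 in this run):

  # host -> both target addresses, and from inside the namespace back to the initiator
  ping -c 1 10.0.0.2
  ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
  # initiator-side kernel driver, then the SPDK target inside the namespace
  modprobe nvme-tcp
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
  nvmfpid=$!
  # waitforlisten then polls the RPC socket /var/tmp/spdk.sock until the app answers
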
00:08:54.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:54.163 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:54.163 18:28:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:54.163 [2024-07-15 18:28:16.629334] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:08:54.163 [2024-07-15 18:28:16.629449] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:54.440 [2024-07-15 18:28:16.776915] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:54.440 [2024-07-15 18:28:16.873593] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:54.440 [2024-07-15 18:28:16.873642] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:54.440 [2024-07-15 18:28:16.873652] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:54.440 [2024-07-15 18:28:16.873660] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:54.440 [2024-07-15 18:28:16.873667] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:54.440 [2024-07-15 18:28:16.874266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.440 [2024-07-15 18:28:16.874266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:55.027 18:28:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:55.027 18:28:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:08:55.027 18:28:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:55.027 18:28:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:55.027 18:28:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:55.027 18:28:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:55.027 18:28:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:55.027 18:28:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.027 18:28:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:55.027 [2024-07-15 18:28:17.570736] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:55.027 18:28:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.027 18:28:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:55.027 18:28:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.027 18:28:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:55.027 18:28:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.027 18:28:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:55.027 18:28:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.027 18:28:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:55.027 [2024-07-15 18:28:17.594887] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:55.027 18:28:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.027 18:28:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:55.027 18:28:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.027 18:28:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:55.027 NULL1 00:08:55.027 18:28:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.027 18:28:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:55.027 18:28:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.027 18:28:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:55.027 Delay0 00:08:55.027 18:28:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.027 18:28:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:55.027 18:28:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.027 18:28:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:55.027 18:28:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.027 18:28:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=71578 00:08:55.027 18:28:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:55.027 18:28:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:55.286 [2024-07-15 18:28:17.811045] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
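delete_subsystem.sh then provisions the target over JSON-RPC and starts a short perf run before pulling the subsystem out from under it (the nvmf_delete_subsystem call is traced just below). Assuming rpc_cmd is essentially a wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock, the sequence condenses to:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py      # what rpc_cmd is assumed to invoke
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # namespace backing: a null bdev wrapped in a delay bdev, so queued I/O stays
  # outstanding long enough to race with the subsystem delete
  $rpc bdev_null_create NULL1 1000 512
  $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  # 5 seconds of 70/30 randrw at queue depth 128 with 512-byte I/O against the listener
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!                                          # 71578 in this run
  sleep 2
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

The runs of "Read/Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" that follow are what this test exercises: commands still queued against Delay0 are failed back to spdk_nvme_perf once the subsystem disappears.
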
00:08:57.188 18:28:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:57.188 18:28:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.188 18:28:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:57.448 Read completed with error (sct=0, sc=8) 00:08:57.448 starting I/O failed: -6 00:08:57.448 Write completed with error (sct=0, sc=8) 00:08:57.448 Read completed with error (sct=0, sc=8) 00:08:57.448 Read completed with error (sct=0, sc=8) 00:08:57.448 Read completed with error (sct=0, sc=8) 00:08:57.448 starting I/O failed: -6 00:08:57.448 Read completed with error (sct=0, sc=8) 00:08:57.448 Read completed with error (sct=0, sc=8) 00:08:57.448 Write completed with error (sct=0, sc=8) 00:08:57.448 Read completed with error (sct=0, sc=8) 00:08:57.448 starting I/O failed: -6 00:08:57.448 Read completed with error (sct=0, sc=8) 00:08:57.448 Write completed with error (sct=0, sc=8) 00:08:57.448 Read completed with error (sct=0, sc=8) 00:08:57.448 Write completed with error (sct=0, sc=8) 00:08:57.448 starting I/O failed: -6 00:08:57.448 Read completed with error (sct=0, sc=8) 00:08:57.448 Read completed with error (sct=0, sc=8) 00:08:57.448 Read completed with error (sct=0, sc=8) 00:08:57.448 Read completed with error (sct=0, sc=8) 00:08:57.448 starting I/O failed: -6 00:08:57.448 Write completed with error (sct=0, sc=8) 00:08:57.448 Read completed with error (sct=0, sc=8) 00:08:57.448 Read completed with error (sct=0, sc=8) 00:08:57.448 Write completed with error (sct=0, sc=8) 00:08:57.448 starting I/O failed: -6 00:08:57.448 Write completed with error (sct=0, sc=8) 00:08:57.448 Write completed with error (sct=0, sc=8) 00:08:57.448 Read completed with error (sct=0, sc=8) 00:08:57.448 Write completed with error (sct=0, sc=8) 00:08:57.448 starting I/O failed: -6 00:08:57.448 Read completed with error (sct=0, sc=8) 00:08:57.448 Read completed with error (sct=0, sc=8) 00:08:57.448 Write completed with error (sct=0, sc=8) 00:08:57.448 Read completed with error (sct=0, sc=8) 00:08:57.448 starting I/O failed: -6 00:08:57.448 Write completed with error (sct=0, sc=8) 00:08:57.448 Write completed with error (sct=0, sc=8) 00:08:57.448 Read completed with error (sct=0, sc=8) 00:08:57.448 Write completed with error (sct=0, sc=8) 00:08:57.448 starting I/O failed: -6 00:08:57.448 Write completed with error (sct=0, sc=8) 00:08:57.448 Read completed with error (sct=0, sc=8) 00:08:57.448 Read completed with error (sct=0, sc=8) 00:08:57.448 Write completed with error (sct=0, sc=8) 00:08:57.448 starting I/O failed: -6 00:08:57.448 Read completed with error (sct=0, sc=8) 00:08:57.448 Read completed with error (sct=0, sc=8) 00:08:57.448 Write completed with error (sct=0, sc=8) 00:08:57.448 Read completed with error (sct=0, sc=8) 00:08:57.448 starting I/O failed: -6 00:08:57.449 Read completed with error (sct=0, sc=8) 00:08:57.449 Write completed with error (sct=0, sc=8) 00:08:57.449 Read completed with error (sct=0, sc=8) 00:08:57.449 [2024-07-15 18:28:19.841133] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21038d0 is same with the state(5) to be set 00:08:57.449 Read completed with error (sct=0, sc=8) 00:08:57.449 Write completed with error (sct=0, sc=8) 00:08:57.449 Read completed with error (sct=0, sc=8) 00:08:57.449 Read completed with error (sct=0, sc=8) 00:08:57.449 Read completed with error (sct=0, sc=8) 00:08:57.449 
Read completed with error (sct=0, sc=8) 00:08:57.449 Write completed with error (sct=0, sc=8) ... [repeated Read/Write completed with error (sct=0, sc=8) and starting I/O failed: -6 entries for the remaining queued I/O, interleaved with nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: "The recv state of tqpair=... is same with the state(5) to be set" messages for tqpair=0x2126a80 (18:28:19.841647), 0x7f7570000c00 (18:28:19.845224), 0x2103510 (18:28:20.823926), 0x7f757000cfe0 (18:28:20.840500) and 0x21036f0 (18:28:20.841274)] 00:08:58.386 Read completed with error
(sct=0, sc=8) 00:08:58.386 Write completed with error (sct=0, sc=8) 00:08:58.386 Read completed with error (sct=0, sc=8) 00:08:58.386 Write completed with error (sct=0, sc=8) 00:08:58.386 Read completed with error (sct=0, sc=8) 00:08:58.386 Write completed with error (sct=0, sc=8) 00:08:58.386 Write completed with error (sct=0, sc=8) 00:08:58.386 Read completed with error (sct=0, sc=8) 00:08:58.386 Read completed with error (sct=0, sc=8) 00:08:58.386 Write completed with error (sct=0, sc=8) 00:08:58.386 Write completed with error (sct=0, sc=8) 00:08:58.386 Read completed with error (sct=0, sc=8) 00:08:58.386 Write completed with error (sct=0, sc=8) 00:08:58.386 Read completed with error (sct=0, sc=8) 00:08:58.386 Read completed with error (sct=0, sc=8) 00:08:58.386 [2024-07-15 18:28:20.841441] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21254c0 is same with the state(5) to be set 00:08:58.386 Read completed with error (sct=0, sc=8) 00:08:58.386 Read completed with error (sct=0, sc=8) 00:08:58.386 Read completed with error (sct=0, sc=8) 00:08:58.386 Read completed with error (sct=0, sc=8) 00:08:58.386 Write completed with error (sct=0, sc=8) 00:08:58.386 Write completed with error (sct=0, sc=8) 00:08:58.386 Write completed with error (sct=0, sc=8) 00:08:58.386 Read completed with error (sct=0, sc=8) 00:08:58.386 Read completed with error (sct=0, sc=8) 00:08:58.386 Read completed with error (sct=0, sc=8) 00:08:58.386 Read completed with error (sct=0, sc=8) 00:08:58.386 Read completed with error (sct=0, sc=8) 00:08:58.386 Write completed with error (sct=0, sc=8) 00:08:58.386 Read completed with error (sct=0, sc=8) 00:08:58.386 Read completed with error (sct=0, sc=8) 00:08:58.386 Read completed with error (sct=0, sc=8) 00:08:58.386 Read completed with error (sct=0, sc=8) 00:08:58.386 Write completed with error (sct=0, sc=8) 00:08:58.386 Write completed with error (sct=0, sc=8) 00:08:58.386 Read completed with error (sct=0, sc=8) 00:08:58.386 Write completed with error (sct=0, sc=8) 00:08:58.386 Write completed with error (sct=0, sc=8) 00:08:58.386 Read completed with error (sct=0, sc=8) 00:08:58.386 Read completed with error (sct=0, sc=8) 00:08:58.386 Read completed with error (sct=0, sc=8) 00:08:58.386 Read completed with error (sct=0, sc=8) 00:08:58.386 Write completed with error (sct=0, sc=8) 00:08:58.386 Read completed with error (sct=0, sc=8) 00:08:58.386 Read completed with error (sct=0, sc=8) 00:08:58.386 Read completed with error (sct=0, sc=8) 00:08:58.386 Read completed with error (sct=0, sc=8) 00:08:58.386 Write completed with error (sct=0, sc=8) 00:08:58.386 Read completed with error (sct=0, sc=8) 00:08:58.386 Write completed with error (sct=0, sc=8) 00:08:58.386 Write completed with error (sct=0, sc=8) 00:08:58.386 Read completed with error (sct=0, sc=8) 00:08:58.386 Read completed with error (sct=0, sc=8) 00:08:58.386 Read completed with error (sct=0, sc=8) 00:08:58.386 Read completed with error (sct=0, sc=8) 00:08:58.387 Read completed with error (sct=0, sc=8) 00:08:58.387 Write completed with error (sct=0, sc=8) 00:08:58.387 Read completed with error (sct=0, sc=8) 00:08:58.387 Write completed with error (sct=0, sc=8) 00:08:58.387 Read completed with error (sct=0, sc=8) 00:08:58.387 Read completed with error (sct=0, sc=8) 00:08:58.387 Read completed with error (sct=0, sc=8) 00:08:58.387 [2024-07-15 18:28:20.842125] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f757000d740 is same with the state(5) to be 
set 00:08:58.387 Initializing NVMe Controllers 00:08:58.387 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:58.387 Controller IO queue size 128, less than required. 00:08:58.387 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:58.387 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:58.387 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:58.387 Initialization complete. Launching workers. 00:08:58.387 ======================================================== 00:08:58.387 Latency(us) 00:08:58.387 Device Information : IOPS MiB/s Average min max 00:08:58.387 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 168.52 0.08 898347.53 526.57 1009025.61 00:08:58.387 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 179.46 0.09 960007.24 507.16 2003259.25 00:08:58.387 ======================================================== 00:08:58.387 Total : 347.98 0.17 930146.33 507.16 2003259.25 00:08:58.387 00:08:58.387 [2024-07-15 18:28:20.843988] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2103510 (9): Bad file descriptor 00:08:58.387 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:08:58.387 18:28:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:58.387 18:28:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:08:58.387 18:28:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 71578 00:08:58.387 18:28:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:08:58.953 18:28:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:58.953 18:28:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 71578 00:08:58.953 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (71578) - No such process 00:08:58.953 18:28:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 71578 00:08:58.953 18:28:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:08:58.953 18:28:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 71578 00:08:58.953 18:28:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:08:58.953 18:28:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:58.953 18:28:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:08:58.953 18:28:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:58.953 18:28:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 71578 00:08:58.953 18:28:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:08:58.953 18:28:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:58.953 18:28:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:58.953 18:28:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:58.953 18:28:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:58.953 18:28:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:58.953 18:28:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:58.953 18:28:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:58.953 18:28:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:58.953 18:28:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:58.953 18:28:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:58.953 [2024-07-15 18:28:21.378636] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:58.953 18:28:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:58.953 18:28:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:58.953 18:28:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:58.953 18:28:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:58.953 18:28:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:58.953 18:28:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=71624 00:08:58.953 18:28:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:08:58.953 18:28:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:58.953 18:28:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71624 00:08:58.953 18:28:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:59.211 [2024-07-15 18:28:21.567414] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
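For reference, the delete-while-under-I/O flow being exercised here can be reproduced outside the test harness with a few of the same commands. This is a minimal sketch, assuming a running nvmf_tgt, the stock rpc.py and spdk_nvme_perf binaries at the paths shown in the log, and an existing Delay0 bdev; it is not the test script's exact helper logic.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    nqn=nqn.2016-06.io.spdk:cnode1

    # Create the subsystem, expose it over TCP, and attach the Delay0 namespace.
    $rpc nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_ns "$nqn" Delay0

    # Run the same workload as above in the background: 3 s, QD 128, 70/30 randrw, 512 B I/O.
    $perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
          -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!

    # Delete the subsystem while I/O is still in flight; outstanding commands are
    # expected to come back as errors, as in the "completed with error" lines above.
    sleep 1
    $rpc nvmf_delete_subsystem "$nqn"

    # perf exits non-zero because of the aborted I/O; that is the expected outcome here.
    wait "$perf_pid" || true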
00:08:59.469 18:28:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:59.469 18:28:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71624 00:08:59.469 18:28:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:00.038 18:28:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:00.038 18:28:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71624 00:09:00.038 18:28:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:00.606 18:28:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:00.606 18:28:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71624 00:09:00.606 18:28:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:00.862 18:28:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:00.862 18:28:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71624 00:09:00.862 18:28:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:01.428 18:28:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:01.428 18:28:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71624 00:09:01.428 18:28:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:01.995 18:28:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:01.995 18:28:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71624 00:09:01.995 18:28:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:01.995 Initializing NVMe Controllers 00:09:01.995 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:01.995 Controller IO queue size 128, less than required. 00:09:01.995 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:01.995 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:01.995 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:01.995 Initialization complete. Launching workers. 
00:09:01.995 ======================================================== 00:09:01.995 Latency(us) 00:09:01.995 Device Information : IOPS MiB/s Average min max 00:09:01.995 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004190.28 1000127.65 1013674.67 00:09:01.995 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004286.43 1000137.16 1010972.96 00:09:01.995 ======================================================== 00:09:01.995 Total : 256.00 0.12 1004238.36 1000127.65 1013674.67 00:09:01.995 00:09:02.561 18:28:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:02.561 18:28:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71624 00:09:02.561 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (71624) - No such process 00:09:02.561 18:28:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 71624 00:09:02.561 18:28:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:09:02.561 18:28:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:09:02.561 18:28:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:02.561 18:28:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:09:02.561 18:28:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:02.561 18:28:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:09:02.561 18:28:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:02.561 18:28:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:02.561 rmmod nvme_tcp 00:09:02.561 rmmod nvme_fabrics 00:09:02.561 rmmod nvme_keyring 00:09:02.561 18:28:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:02.561 18:28:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:09:02.561 18:28:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:09:02.561 18:28:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 71527 ']' 00:09:02.561 18:28:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 71527 00:09:02.561 18:28:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 71527 ']' 00:09:02.561 18:28:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 71527 00:09:02.561 18:28:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:09:02.561 18:28:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:02.561 18:28:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71527 00:09:02.561 killing process with pid 71527 00:09:02.561 18:28:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:02.561 18:28:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:02.561 18:28:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71527' 00:09:02.561 18:28:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 71527 00:09:02.561 18:28:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 71527 00:09:02.883 18:28:25 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:02.883 18:28:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:02.883 18:28:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:02.883 18:28:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:02.883 18:28:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:02.883 18:28:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:02.883 18:28:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:02.883 18:28:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:02.883 18:28:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:02.883 00:09:02.883 real 0m9.370s 00:09:02.883 user 0m27.764s 00:09:02.883 sys 0m2.352s 00:09:02.883 18:28:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:02.883 ************************************ 00:09:02.883 END TEST nvmf_delete_subsystem 00:09:02.883 ************************************ 00:09:02.883 18:28:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:02.883 18:28:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:02.883 18:28:25 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:09:02.883 18:28:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:02.883 18:28:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:02.883 18:28:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:02.883 ************************************ 00:09:02.883 START TEST nvmf_ns_masking 00:09:02.883 ************************************ 00:09:02.883 18:28:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:09:03.141 * Looking for test storage... 
00:09:03.141 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:03.141 18:28:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:03.141 18:28:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:09:03.141 18:28:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:03.141 18:28:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:03.141 18:28:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:03.141 18:28:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:03.141 18:28:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:03.141 18:28:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:03.141 18:28:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:03.141 18:28:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:03.141 18:28:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:03.141 18:28:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:03.141 18:28:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:09:03.141 18:28:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=ee8aff67-4252-4979-91cf-1a72f40d57b6 00:09:03.141 18:28:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:03.141 18:28:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:03.141 18:28:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:03.141 18:28:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:03.141 18:28:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:03.142 18:28:25 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:03.142 18:28:25 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:03.142 18:28:25 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:03.142 18:28:25 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.142 18:28:25 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.142 18:28:25 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.142 18:28:25 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:09:03.142 18:28:25 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.142 18:28:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:09:03.142 18:28:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:03.142 18:28:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:03.142 18:28:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:03.142 18:28:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:03.142 18:28:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:03.142 18:28:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:03.142 18:28:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:03.142 18:28:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:03.142 18:28:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:03.142 18:28:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:09:03.142 18:28:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:09:03.142 18:28:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:09:03.142 18:28:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=f7ad257d-ebbf-41c7-8529-a3e6ca8d25c2 00:09:03.142 18:28:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:09:03.142 18:28:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=988fb336-3111-4591-a429-e5ac2aaf3690 00:09:03.142 18:28:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:09:03.142 
18:28:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:09:03.142 18:28:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:09:03.142 18:28:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:09:03.142 18:28:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=cf1c0d81-21bb-4cf4-9491-e3a6e09c2320 00:09:03.142 18:28:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:09:03.142 18:28:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:03.142 18:28:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:03.142 18:28:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:03.142 18:28:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:03.142 18:28:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:03.142 18:28:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:03.142 18:28:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:03.142 18:28:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:03.142 18:28:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:03.142 18:28:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:03.142 18:28:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:03.142 18:28:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:03.142 18:28:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:03.142 18:28:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:03.142 18:28:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:03.142 18:28:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:03.142 18:28:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:03.142 18:28:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:03.142 18:28:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:03.142 18:28:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:03.142 18:28:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:03.142 18:28:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:03.142 18:28:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:03.142 18:28:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:03.142 18:28:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:03.142 18:28:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:03.142 18:28:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:03.142 18:28:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:03.142 Cannot find device "nvmf_tgt_br" 00:09:03.142 18:28:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@155 -- # true 00:09:03.142 18:28:25 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:03.142 Cannot find device "nvmf_tgt_br2" 00:09:03.142 18:28:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@156 -- # true 00:09:03.142 18:28:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:03.142 18:28:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:03.142 Cannot find device "nvmf_tgt_br" 00:09:03.142 18:28:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@158 -- # true 00:09:03.142 18:28:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:03.142 Cannot find device "nvmf_tgt_br2" 00:09:03.142 18:28:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@159 -- # true 00:09:03.142 18:28:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:03.142 18:28:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:03.402 18:28:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:03.402 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:03.402 18:28:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@162 -- # true 00:09:03.402 18:28:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:03.402 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:03.402 18:28:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@163 -- # true 00:09:03.402 18:28:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:03.402 18:28:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:03.402 18:28:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:03.402 18:28:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:03.402 18:28:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:03.402 18:28:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:03.402 18:28:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:03.402 18:28:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:03.402 18:28:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:03.402 18:28:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:03.402 18:28:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:03.402 18:28:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:03.402 18:28:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:03.403 18:28:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:03.403 18:28:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:03.403 18:28:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:03.403 18:28:25 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:03.403 18:28:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:03.403 18:28:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:03.403 18:28:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:03.403 18:28:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:03.403 18:28:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:03.403 18:28:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:03.403 18:28:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:03.661 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:03.661 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 00:09:03.661 00:09:03.661 --- 10.0.0.2 ping statistics --- 00:09:03.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:03.661 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:09:03.661 18:28:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:03.661 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:03.661 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:09:03.661 00:09:03.661 --- 10.0.0.3 ping statistics --- 00:09:03.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:03.661 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:09:03.661 18:28:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:03.661 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:03.661 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.058 ms 00:09:03.661 00:09:03.661 --- 10.0.0.1 ping statistics --- 00:09:03.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:03.661 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:09:03.661 18:28:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:03.661 18:28:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@433 -- # return 0 00:09:03.661 18:28:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:03.661 18:28:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:03.661 18:28:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:03.661 18:28:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:03.661 18:28:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:03.661 18:28:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:03.661 18:28:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:03.661 18:28:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:09:03.661 18:28:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:03.661 18:28:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:03.661 18:28:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:03.662 18:28:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:09:03.662 18:28:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=71866 00:09:03.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:03.662 18:28:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 71866 00:09:03.662 18:28:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 71866 ']' 00:09:03.662 18:28:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:03.662 18:28:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:03.662 18:28:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:03.662 18:28:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:03.662 18:28:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:03.662 [2024-07-15 18:28:26.111400] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:09:03.662 [2024-07-15 18:28:26.111673] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:03.662 [2024-07-15 18:28:26.253456] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.919 [2024-07-15 18:28:26.349408] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:03.919 [2024-07-15 18:28:26.349655] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:03.919 [2024-07-15 18:28:26.349864] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:03.919 [2024-07-15 18:28:26.349910] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:03.919 [2024-07-15 18:28:26.349977] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:03.919 [2024-07-15 18:28:26.350016] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.486 18:28:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:04.486 18:28:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:09:04.486 18:28:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:04.486 18:28:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:04.486 18:28:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:04.486 18:28:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:04.486 18:28:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:04.745 [2024-07-15 18:28:27.217161] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:04.745 18:28:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:09:04.745 18:28:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:09:04.745 18:28:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:09:05.004 Malloc1 00:09:05.004 18:28:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:09:05.262 Malloc2 00:09:05.262 18:28:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:05.262 18:28:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:09:05.521 18:28:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:05.779 [2024-07-15 18:28:28.277194] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:05.779 18:28:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:09:05.779 18:28:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I cf1c0d81-21bb-4cf4-9491-e3a6e09c2320 -a 10.0.0.2 -s 4420 -i 4 00:09:06.038 18:28:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:09:06.038 18:28:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:09:06.038 18:28:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:06.038 18:28:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:06.038 18:28:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 
00:09:07.940 18:28:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:07.940 18:28:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:07.940 18:28:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:07.940 18:28:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:07.940 18:28:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:07.940 18:28:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:09:07.940 18:28:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:09:07.940 18:28:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:07.940 18:28:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:09:07.940 18:28:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:09:07.940 18:28:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:09:07.940 18:28:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:07.940 18:28:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:07.940 [ 0]:0x1 00:09:07.940 18:28:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:07.940 18:28:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:07.940 18:28:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=bde4959e0ef04158a067939f558d8f0f 00:09:07.940 18:28:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ bde4959e0ef04158a067939f558d8f0f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:07.940 18:28:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:09:08.200 18:28:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:09:08.200 18:28:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:08.200 18:28:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:08.200 [ 0]:0x1 00:09:08.200 18:28:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:08.200 18:28:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:08.200 18:28:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=bde4959e0ef04158a067939f558d8f0f 00:09:08.200 18:28:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ bde4959e0ef04158a067939f558d8f0f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:08.200 18:28:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:09:08.200 18:28:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:08.459 18:28:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:08.459 [ 1]:0x2 00:09:08.459 18:28:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:08.459 18:28:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:08.459 18:28:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=d7900fafd1aa475aab3ce0d5dc6fc372 00:09:08.459 18:28:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d7900fafd1aa475aab3ce0d5dc6fc372 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:08.459 18:28:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:09:08.459 18:28:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:08.459 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:08.459 18:28:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:08.718 18:28:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:09:08.977 18:28:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:09:08.977 18:28:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I cf1c0d81-21bb-4cf4-9491-e3a6e09c2320 -a 10.0.0.2 -s 4420 -i 4 00:09:08.977 18:28:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:09:08.977 18:28:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:09:08.977 18:28:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:08.977 18:28:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:09:08.977 18:28:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:09:08.977 18:28:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:09:10.879 18:28:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:10.879 18:28:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:10.879 18:28:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:10.879 18:28:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:10.879 18:28:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:10.879 18:28:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:09:10.879 18:28:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:09:10.879 18:28:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:11.138 18:28:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:09:11.138 18:28:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:09:11.138 18:28:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:09:11.138 18:28:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:11.138 18:28:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:11.138 18:28:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:11.138 18:28:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:11.138 18:28:33 nvmf_tcp.nvmf_ns_masking -- 
common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:11.138 18:28:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:11.138 18:28:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:11.138 18:28:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:11.138 18:28:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:11.138 18:28:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:11.138 18:28:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:11.138 18:28:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:09:11.138 18:28:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:11.138 18:28:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:11.138 18:28:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:11.138 18:28:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:11.138 18:28:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:11.138 18:28:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:09:11.138 18:28:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:11.138 18:28:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:11.138 [ 0]:0x2 00:09:11.138 18:28:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:11.138 18:28:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:11.138 18:28:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d7900fafd1aa475aab3ce0d5dc6fc372 00:09:11.138 18:28:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d7900fafd1aa475aab3ce0d5dc6fc372 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:11.138 18:28:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:11.396 18:28:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:09:11.396 18:28:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:11.396 18:28:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:11.396 [ 0]:0x1 00:09:11.397 18:28:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:11.397 18:28:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:11.397 18:28:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=bde4959e0ef04158a067939f558d8f0f 00:09:11.397 18:28:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ bde4959e0ef04158a067939f558d8f0f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:11.397 18:28:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:09:11.397 18:28:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:11.397 18:28:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:11.397 [ 1]:0x2 00:09:11.397 18:28:33 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:11.397 18:28:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:11.397 18:28:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d7900fafd1aa475aab3ce0d5dc6fc372 00:09:11.397 18:28:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d7900fafd1aa475aab3ce0d5dc6fc372 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:11.397 18:28:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:11.655 18:28:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:09:11.655 18:28:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:11.655 18:28:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:11.655 18:28:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:11.655 18:28:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:11.655 18:28:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:11.655 18:28:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:11.655 18:28:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:11.655 18:28:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:11.655 18:28:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:11.655 18:28:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:11.655 18:28:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:11.912 18:28:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:09:11.912 18:28:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:11.912 18:28:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:11.912 18:28:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:11.912 18:28:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:11.912 18:28:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:11.912 18:28:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:09:11.912 18:28:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:11.912 18:28:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:11.912 [ 0]:0x2 00:09:11.912 18:28:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:11.912 18:28:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:11.912 18:28:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d7900fafd1aa475aab3ce0d5dc6fc372 00:09:11.912 18:28:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d7900fafd1aa475aab3ce0d5dc6fc372 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:11.912 18:28:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 
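The two RPCs driving this part of the test are what implement the masking itself: a namespace added with --no-auto-visible is hidden from every host until it is explicitly attached, and detaching it hides it again. The sequence below is condensed from the rpc.py calls visible in the log; the NQNs and NSID are the ones used by this run, shown only as an example.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    host=nqn.2016-06.io.spdk:host1

    # NSID 1 is created without auto-visibility, so no host can see it yet.
    $rpc nvmf_subsystem_add_ns "$nqn" Malloc1 -n 1 --no-auto-visible

    # Attach it to one host NQN: that initiator now enumerates NSID 1 with a real NGUID.
    $rpc nvmf_ns_add_host "$nqn" 1 "$host"

    # Detach it again: the namespace drops back to the all-zero NGUID / hidden state
    # that the visibility probe above checks for.
    $rpc nvmf_ns_remove_host "$nqn" 1 "$host"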
00:09:11.912 18:28:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:11.912 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:11.912 18:28:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:12.169 18:28:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:09:12.169 18:28:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I cf1c0d81-21bb-4cf4-9491-e3a6e09c2320 -a 10.0.0.2 -s 4420 -i 4 00:09:12.169 18:28:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:09:12.170 18:28:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:09:12.170 18:28:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:12.170 18:28:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:09:12.170 18:28:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:09:12.170 18:28:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:09:14.693 18:28:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:14.693 18:28:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:14.693 18:28:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:14.693 18:28:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:09:14.693 18:28:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:14.693 18:28:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:09:14.693 18:28:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:09:14.693 18:28:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:14.693 18:28:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:09:14.693 18:28:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:09:14.693 18:28:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:09:14.693 18:28:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:14.693 18:28:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:14.693 [ 0]:0x1 00:09:14.693 18:28:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:14.693 18:28:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:14.693 18:28:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=bde4959e0ef04158a067939f558d8f0f 00:09:14.693 18:28:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ bde4959e0ef04158a067939f558d8f0f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:14.693 18:28:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:09:14.693 18:28:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:14.693 18:28:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 
-- # grep 0x2 00:09:14.693 [ 1]:0x2 00:09:14.693 18:28:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:14.693 18:28:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:14.693 18:28:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d7900fafd1aa475aab3ce0d5dc6fc372 00:09:14.693 18:28:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d7900fafd1aa475aab3ce0d5dc6fc372 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:14.693 18:28:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:14.693 18:28:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:09:14.693 18:28:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:14.693 18:28:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:14.693 18:28:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:14.693 18:28:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:14.693 18:28:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:14.693 18:28:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:14.693 18:28:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:14.693 18:28:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:14.693 18:28:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:14.693 18:28:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:14.693 18:28:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:14.693 18:28:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:09:14.693 18:28:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:14.693 18:28:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:14.693 18:28:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:14.693 18:28:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:14.693 18:28:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:14.693 18:28:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:09:14.693 18:28:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:14.693 18:28:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:14.693 [ 0]:0x2 00:09:14.693 18:28:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:14.693 18:28:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:14.693 18:28:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d7900fafd1aa475aab3ce0d5dc6fc372 00:09:14.693 18:28:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d7900fafd1aa475aab3ce0d5dc6fc372 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:14.693 18:28:37 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:14.693 18:28:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:14.693 18:28:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:14.694 18:28:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:14.694 18:28:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:14.694 18:28:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:14.694 18:28:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:14.694 18:28:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:14.694 18:28:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:14.694 18:28:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:14.694 18:28:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:14.694 18:28:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:14.952 [2024-07-15 18:28:37.393884] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:09:14.952 2024/07/15 18:28:37 error on JSON-RPC call, method: nvmf_ns_remove_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 nsid:2], err: error received for nvmf_ns_remove_host method, err: Code=-32602 Msg=Invalid parameters 00:09:14.952 request: 00:09:14.952 { 00:09:14.952 "method": "nvmf_ns_remove_host", 00:09:14.952 "params": { 00:09:14.952 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:09:14.952 "nsid": 2, 00:09:14.952 "host": "nqn.2016-06.io.spdk:host1" 00:09:14.952 } 00:09:14.952 } 00:09:14.952 Got JSON-RPC error response 00:09:14.952 GoRPCClient: error on JSON-RPC call 00:09:14.952 18:28:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:14.952 18:28:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:14.952 18:28:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:14.952 18:28:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:14.952 18:28:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:09:14.952 18:28:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:14.952 18:28:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:14.952 18:28:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:14.952 18:28:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:14.952 18:28:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:14.952 18:28:37 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:14.952 18:28:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:14.952 18:28:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:14.952 18:28:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:14.952 18:28:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:14.952 18:28:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:14.952 18:28:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:09:14.952 18:28:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:14.952 18:28:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:14.952 18:28:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:14.952 18:28:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:14.952 18:28:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:14.952 18:28:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:09:14.952 18:28:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:14.952 18:28:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:14.952 [ 0]:0x2 00:09:14.952 18:28:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:14.952 18:28:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:14.952 18:28:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d7900fafd1aa475aab3ce0d5dc6fc372 00:09:14.952 18:28:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d7900fafd1aa475aab3ce0d5dc6fc372 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:14.952 18:28:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:09:14.952 18:28:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:15.227 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:15.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:09:15.227 18:28:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:09:15.227 18:28:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=72234 00:09:15.227 18:28:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:09:15.227 18:28:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 72234 /var/tmp/host.sock 00:09:15.227 18:28:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 72234 ']' 00:09:15.227 18:28:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:09:15.227 18:28:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:15.227 18:28:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
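The JSON-RPC request captured just above also shows the shape of the masking RPCs this test drives. In terms of the rpc.py calls seen in the trace, the per-host visibility flow is roughly:

# Grant, then revoke, host1's visibility of namespace 1 (ns_masking.sh@88 / @93 above):
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
# The same call against namespace 2 is the one rejected above with Code=-32602
# Msg=Invalid parameters ("Unable to add/remove ... to namespace ID 2").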
00:09:15.227 18:28:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:15.227 18:28:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:15.227 [2024-07-15 18:28:37.621932] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:09:15.227 [2024-07-15 18:28:37.622033] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72234 ] 00:09:15.227 [2024-07-15 18:28:37.768455] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.498 [2024-07-15 18:28:37.869467] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:16.065 18:28:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:16.065 18:28:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:09:16.065 18:28:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:16.324 18:28:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:16.324 18:28:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid f7ad257d-ebbf-41c7-8529-a3e6ca8d25c2 00:09:16.324 18:28:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:09:16.324 18:28:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g F7AD257DEBBF41C78529A3E6CA8D25C2 -i 00:09:16.583 18:28:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 988fb336-3111-4591-a429-e5ac2aaf3690 00:09:16.583 18:28:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:09:16.583 18:28:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 988FB33631114591A429E5AC2AAF3690 -i 00:09:16.841 18:28:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:17.097 18:28:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:09:17.353 18:28:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:09:17.353 18:28:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:09:17.610 nvme0n1 00:09:17.610 18:28:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:09:17.610 18:28:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp 
-a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:09:17.869 nvme1n2 00:09:17.869 18:28:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:09:17.869 18:28:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:09:17.869 18:28:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:09:17.869 18:28:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:09:17.869 18:28:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:09:18.127 18:28:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:09:18.127 18:28:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:09:18.127 18:28:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:09:18.127 18:28:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:09:18.385 18:28:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ f7ad257d-ebbf-41c7-8529-a3e6ca8d25c2 == \f\7\a\d\2\5\7\d\-\e\b\b\f\-\4\1\c\7\-\8\5\2\9\-\a\3\e\6\c\a\8\d\2\5\c\2 ]] 00:09:18.385 18:28:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:09:18.385 18:28:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:09:18.385 18:28:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:09:18.385 18:28:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 988fb336-3111-4591-a429-e5ac2aaf3690 == \9\8\8\f\b\3\3\6\-\3\1\1\1\-\4\5\9\1\-\a\4\2\9\-\e\5\a\c\2\a\a\f\3\6\9\0 ]] 00:09:18.385 18:28:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 72234 00:09:18.385 18:28:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 72234 ']' 00:09:18.385 18:28:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 72234 00:09:18.386 18:28:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:09:18.643 18:28:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:18.643 18:28:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72234 00:09:18.643 killing process with pid 72234 00:09:18.643 18:28:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:18.643 18:28:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:18.643 18:28:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72234' 00:09:18.643 18:28:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 72234 00:09:18.643 18:28:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 72234 00:09:18.901 18:28:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:19.160 18:28:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:09:19.160 18:28:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:09:19.160 18:28:41 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:19.160 18:28:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:09:19.160 18:28:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:19.160 18:28:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:09:19.160 18:28:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:19.160 18:28:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:19.160 rmmod nvme_tcp 00:09:19.160 rmmod nvme_fabrics 00:09:19.160 rmmod nvme_keyring 00:09:19.160 18:28:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:19.160 18:28:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:09:19.160 18:28:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:09:19.160 18:28:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 71866 ']' 00:09:19.160 18:28:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 71866 00:09:19.160 18:28:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 71866 ']' 00:09:19.160 18:28:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 71866 00:09:19.160 18:28:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:09:19.160 18:28:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:19.160 18:28:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71866 00:09:19.160 killing process with pid 71866 00:09:19.160 18:28:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:19.160 18:28:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:19.160 18:28:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71866' 00:09:19.160 18:28:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 71866 00:09:19.160 18:28:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 71866 00:09:19.418 18:28:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:19.418 18:28:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:19.418 18:28:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:19.418 18:28:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:19.418 18:28:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:19.418 18:28:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:19.418 18:28:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:19.418 18:28:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:19.418 18:28:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:19.418 00:09:19.418 real 0m16.553s 00:09:19.418 user 0m24.235s 00:09:19.418 sys 0m3.436s 00:09:19.418 18:28:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:19.418 18:28:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:19.418 ************************************ 00:09:19.418 END TEST nvmf_ns_masking 00:09:19.418 ************************************ 00:09:19.418 18:28:42 nvmf_tcp -- common/autotest_common.sh@1142 
-- # return 0 00:09:19.418 18:28:42 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 0 -eq 1 ]] 00:09:19.418 18:28:42 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 0 -eq 1 ]] 00:09:19.418 18:28:42 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:19.418 18:28:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:19.418 18:28:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:19.418 18:28:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:19.676 ************************************ 00:09:19.676 START TEST nvmf_host_management 00:09:19.677 ************************************ 00:09:19.677 18:28:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:19.677 * Looking for test storage... 00:09:19.677 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:19.677 18:28:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:19.677 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:09:19.677 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:19.677 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:19.677 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:19.677 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:19.677 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:19.677 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:19.677 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:19.677 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:19.677 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:19.677 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:19.677 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:09:19.677 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=ee8aff67-4252-4979-91cf-1a72f40d57b6 00:09:19.677 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:19.677 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:19.677 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:19.677 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:19.677 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:19.677 18:28:42 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:19.677 18:28:42 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:19.677 18:28:42 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:19.677 18:28:42 nvmf_tcp.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.677 18:28:42 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.677 18:28:42 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.677 18:28:42 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:09:19.677 18:28:42 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.677 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:09:19.677 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:19.677 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:19.677 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:19.677 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:19.677 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:19.677 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:19.677 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:19.677 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:19.677 18:28:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:19.677 18:28:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:09:19.677 18:28:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:09:19.677 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:19.677 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:19.677 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:19.677 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:19.677 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:19.677 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:19.677 18:28:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:19.677 18:28:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:19.677 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:19.677 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:19.677 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:19.677 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:19.677 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:19.677 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:19.677 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:19.677 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:19.677 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:19.677 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:19.677 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:19.677 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:19.677 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:19.677 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:19.677 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:19.677 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:19.677 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:19.677 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:19.677 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:19.677 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:19.677 Cannot find device "nvmf_tgt_br" 00:09:19.677 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # true 00:09:19.677 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:19.677 Cannot find device "nvmf_tgt_br2" 00:09:19.677 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # true 00:09:19.677 18:28:42 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:19.677 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:19.936 Cannot find device "nvmf_tgt_br" 00:09:19.936 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # true 00:09:19.936 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:19.936 Cannot find device "nvmf_tgt_br2" 00:09:19.936 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # true 00:09:19.936 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:19.936 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:19.936 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:19.936 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:19.936 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:09:19.936 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:19.936 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:19.936 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:09:19.936 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:19.936 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:19.936 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:19.936 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:19.936 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:19.936 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:19.936 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:19.936 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:19.936 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:19.936 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:19.936 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:19.936 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:19.936 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:19.936 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:19.936 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:19.936 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:19.936 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:19.936 
18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:19.936 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:20.195 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:20.195 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:20.195 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:20.195 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:20.195 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:20.195 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:20.195 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:09:20.195 00:09:20.195 --- 10.0.0.2 ping statistics --- 00:09:20.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:20.195 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:09:20.195 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:20.195 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:20.195 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.085 ms 00:09:20.195 00:09:20.195 --- 10.0.0.3 ping statistics --- 00:09:20.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:20.195 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:09:20.195 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:20.195 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:20.195 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:09:20.195 00:09:20.195 --- 10.0.0.1 ping statistics --- 00:09:20.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:20.195 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:09:20.195 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:20.195 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@433 -- # return 0 00:09:20.195 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:20.195 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:20.195 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:20.195 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:20.195 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:20.195 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:20.195 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:20.195 18:28:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:09:20.195 18:28:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:09:20.195 18:28:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:09:20.195 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:20.195 18:28:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:20.195 18:28:42 nvmf_tcp.nvmf_host_management -- 
common/autotest_common.sh@10 -- # set +x 00:09:20.195 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=72589 00:09:20.195 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:09:20.195 18:28:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 72589 00:09:20.195 18:28:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 72589 ']' 00:09:20.195 18:28:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:20.195 18:28:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:20.195 18:28:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:20.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:20.195 18:28:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:20.195 18:28:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:20.195 [2024-07-15 18:28:42.714202] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:09:20.195 [2024-07-15 18:28:42.714275] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:20.454 [2024-07-15 18:28:42.841899] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:20.454 [2024-07-15 18:28:42.943675] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:20.454 [2024-07-15 18:28:42.943728] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:20.454 [2024-07-15 18:28:42.943738] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:20.454 [2024-07-15 18:28:42.943747] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:20.454 [2024-07-15 18:28:42.943753] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
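The networking scaffold whose ping checks appear above comes from nvmf_veth_init (nvmf/common.sh@141-207 in the trace). Stripped of the cleanup and link-up steps, the topology is roughly the following; the second target interface (nvmf_tgt_if2, 10.0.0.3) follows the same pattern:

# Initiator veth pair stays in the root namespace, target veth pair moves into nvmf_tgt_ns_spdk:
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
# Bridge the initiator- and target-side peers together and open TCP/4420 towards the initiator:
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT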
00:09:20.454 [2024-07-15 18:28:42.943961] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:20.454 [2024-07-15 18:28:42.944302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:20.454 [2024-07-15 18:28:42.944303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:09:20.454 [2024-07-15 18:28:42.944877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:21.019 18:28:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:21.019 18:28:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:09:21.019 18:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:21.019 18:28:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:21.019 18:28:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:21.019 18:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:21.019 18:28:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:21.019 18:28:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.019 18:28:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:21.019 [2024-07-15 18:28:43.624932] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:21.277 18:28:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.277 18:28:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:09:21.277 18:28:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:21.277 18:28:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:21.277 18:28:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:09:21.277 18:28:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:09:21.277 18:28:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:09:21.277 18:28:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.277 18:28:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:21.277 Malloc0 00:09:21.277 [2024-07-15 18:28:43.696055] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:21.277 18:28:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.277 18:28:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:09:21.277 18:28:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:21.277 18:28:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:21.277 18:28:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=72661 00:09:21.277 18:28:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:09:21.277 18:28:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 72661 /var/tmp/bdevperf.sock 00:09:21.277 
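The batched rpcs.txt that rpc_cmd replays above is not echoed into the log. Judging from the 64 MB / 512 B malloc defaults (host_management.sh@11-12), the Malloc0 bdev, the listener notice on 10.0.0.2 port 4420, and the nqn.2016-06.io.spdk:cnode0 subsystem that bdevperf attaches to below, it plausibly contains something along these lines; this is a hypothetical reconstruction, not the script's verbatim contents:

# Hypothetical reconstruction of the batched target setup (the real rpcs.txt is not shown in the log):
bdev_malloc_create 64 512 -b Malloc0
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0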
18:28:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 72661 ']' 00:09:21.277 18:28:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:21.277 18:28:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:09:21.277 18:28:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:21.277 18:28:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:21.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:21.277 18:28:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:21.277 18:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:09:21.277 18:28:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:21.277 18:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:09:21.277 18:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:21.277 18:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:21.277 { 00:09:21.277 "params": { 00:09:21.277 "name": "Nvme$subsystem", 00:09:21.277 "trtype": "$TEST_TRANSPORT", 00:09:21.277 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:21.277 "adrfam": "ipv4", 00:09:21.277 "trsvcid": "$NVMF_PORT", 00:09:21.277 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:21.277 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:21.277 "hdgst": ${hdgst:-false}, 00:09:21.277 "ddgst": ${ddgst:-false} 00:09:21.277 }, 00:09:21.277 "method": "bdev_nvme_attach_controller" 00:09:21.277 } 00:09:21.277 EOF 00:09:21.277 )") 00:09:21.277 18:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:09:21.277 18:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:09:21.277 18:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:09:21.277 18:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:21.277 "params": { 00:09:21.277 "name": "Nvme0", 00:09:21.277 "trtype": "tcp", 00:09:21.277 "traddr": "10.0.0.2", 00:09:21.277 "adrfam": "ipv4", 00:09:21.277 "trsvcid": "4420", 00:09:21.277 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:21.277 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:21.277 "hdgst": false, 00:09:21.277 "ddgst": false 00:09:21.277 }, 00:09:21.277 "method": "bdev_nvme_attach_controller" 00:09:21.277 }' 00:09:21.277 [2024-07-15 18:28:43.805260] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:09:21.277 [2024-07-15 18:28:43.805338] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72661 ] 00:09:21.535 [2024-07-15 18:28:43.950196] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.535 [2024-07-15 18:28:44.040603] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.794 Running I/O for 10 seconds... 
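The config printed just above by gen_nvmf_target_json is what bdevperf reads from /dev/fd/63; its core entry is the bdev_nvme_attach_controller call shown, which is roughly equivalent to attaching the controller by hand over bdevperf's RPC socket:

# Roughly equivalent manual attach using the parameters printed above:
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0

The resulting Nvme0n1 bdev is what the verify workload (-q 64 -o 65536 -w verify -t 10) runs against, and what the bdev_get_iostat polling below reports num_read_ops for.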
00:09:22.094 18:28:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:22.094 18:28:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:09:22.094 18:28:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:09:22.094 18:28:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.094 18:28:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:22.094 18:28:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.094 18:28:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:22.094 18:28:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:09:22.094 18:28:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:09:22.094 18:28:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:09:22.094 18:28:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:09:22.094 18:28:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:09:22.094 18:28:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:09:22.094 18:28:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:09:22.094 18:28:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:09:22.094 18:28:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.094 18:28:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:22.094 18:28:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:09:22.354 18:28:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.354 18:28:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=1027 00:09:22.354 18:28:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 1027 -ge 100 ']' 00:09:22.354 18:28:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:09:22.354 18:28:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:09:22.354 18:28:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:09:22.354 18:28:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:22.354 18:28:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.354 18:28:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:22.354 [2024-07-15 18:28:44.735309] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac4310 is same with the state(5) to be set 00:09:22.354 [2024-07-15 18:28:44.735355] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:09:22.354 [2024-07-15 18:28:44.735383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:09:22.354 [2024-07-15 18:28:44.735394] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:09:22.354 [2024-07-15 18:28:44.735403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.354 [2024-07-15 18:28:44.735413] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:09:22.354 [2024-07-15 18:28:44.735421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.354 [2024-07-15 18:28:44.735431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:09:22.354 [2024-07-15 18:28:44.735439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.355 [2024-07-15 18:28:44.735447] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb2af0 is same with the state(5) to be set 00:09:22.355 18:28:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.355 18:28:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:22.355 18:28:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.355 18:28:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:22.355 [2024-07-15 18:28:44.742184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.355 [2024-07-15 18:28:44.742218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.355 [2024-07-15 18:28:44.742237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.355 [2024-07-15 18:28:44.742246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.355 [2024-07-15 18:28:44.742258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.355 [2024-07-15 18:28:44.742267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.355 [2024-07-15 18:28:44.742277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.355 [2024-07-15 18:28:44.742286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.355 [2024-07-15 18:28:44.742297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.355 [2024-07-15 18:28:44.742306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.355 [2024-07-15 18:28:44.742316] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.355 [2024-07-15 18:28:44.742325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.355 [2024-07-15 18:28:44.742336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.355 [2024-07-15 18:28:44.742345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.355 [2024-07-15 18:28:44.742355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.355 [2024-07-15 18:28:44.742364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.355 [2024-07-15 18:28:44.742374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.355 [2024-07-15 18:28:44.742383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.355 [2024-07-15 18:28:44.742393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.355 [2024-07-15 18:28:44.742402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.355 [2024-07-15 18:28:44.742412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.355 [2024-07-15 18:28:44.742421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.355 [2024-07-15 18:28:44.742435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.355 [2024-07-15 18:28:44.742444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.355 [2024-07-15 18:28:44.742454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.355 [2024-07-15 18:28:44.742463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.355 [2024-07-15 18:28:44.742473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.355 [2024-07-15 18:28:44.742481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.355 [2024-07-15 18:28:44.742492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.355 [2024-07-15 18:28:44.742500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.355 [2024-07-15 18:28:44.742511] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.355 [2024-07-15 18:28:44.742519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.355 [2024-07-15 18:28:44.742531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.355 [2024-07-15 18:28:44.742540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.355 [2024-07-15 18:28:44.742550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.355 [2024-07-15 18:28:44.742559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.355 [2024-07-15 18:28:44.742581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.355 [2024-07-15 18:28:44.742590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.355 [2024-07-15 18:28:44.742601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.355 [2024-07-15 18:28:44.742609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.355 [2024-07-15 18:28:44.742619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.355 [2024-07-15 18:28:44.742628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.355 [2024-07-15 18:28:44.742638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.355 [2024-07-15 18:28:44.742647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.355 [2024-07-15 18:28:44.742658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.355 [2024-07-15 18:28:44.742666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.355 [2024-07-15 18:28:44.742677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.355 [2024-07-15 18:28:44.742685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.355 [2024-07-15 18:28:44.742695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.355 [2024-07-15 18:28:44.742704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.355 [2024-07-15 18:28:44.742714] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.355 [2024-07-15 18:28:44.742724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.355 [2024-07-15 18:28:44.742735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.355 [2024-07-15 18:28:44.742743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.355 [2024-07-15 18:28:44.742755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.355 [2024-07-15 18:28:44.742764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.355 [2024-07-15 18:28:44.742774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.355 [2024-07-15 18:28:44.742783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.355 [2024-07-15 18:28:44.742794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.355 [2024-07-15 18:28:44.742802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.355 [2024-07-15 18:28:44.742813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.355 [2024-07-15 18:28:44.742822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.355 [2024-07-15 18:28:44.742832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.355 [2024-07-15 18:28:44.742840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.355 [2024-07-15 18:28:44.742851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.355 [2024-07-15 18:28:44.742859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.355 [2024-07-15 18:28:44.742870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.355 [2024-07-15 18:28:44.742878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.355 [2024-07-15 18:28:44.742889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.355 [2024-07-15 18:28:44.742897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.355 [2024-07-15 18:28:44.742907] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.355 [2024-07-15 18:28:44.742916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.355 [2024-07-15 18:28:44.742927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.355 [2024-07-15 18:28:44.742935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.355 [2024-07-15 18:28:44.742946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.355 [2024-07-15 18:28:44.742954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.355 [2024-07-15 18:28:44.742965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.355 [2024-07-15 18:28:44.742973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.355 [2024-07-15 18:28:44.742983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.356 [2024-07-15 18:28:44.742992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.356 [2024-07-15 18:28:44.743003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.356 [2024-07-15 18:28:44.743012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.356 [2024-07-15 18:28:44.743022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.356 [2024-07-15 18:28:44.743031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.356 [2024-07-15 18:28:44.743041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.356 [2024-07-15 18:28:44.743049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.356 [2024-07-15 18:28:44.743061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.356 [2024-07-15 18:28:44.743070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.356 [2024-07-15 18:28:44.743080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.356 [2024-07-15 18:28:44.743089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.356 [2024-07-15 18:28:44.743100] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.356 [2024-07-15 18:28:44.743108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.356 [2024-07-15 18:28:44.743118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.356 [2024-07-15 18:28:44.743127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.356 [2024-07-15 18:28:44.743137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.356 [2024-07-15 18:28:44.743146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.356 [2024-07-15 18:28:44.743156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.356 [2024-07-15 18:28:44.743165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.356 [2024-07-15 18:28:44.743175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.356 [2024-07-15 18:28:44.743184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.356 [2024-07-15 18:28:44.743194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.356 [2024-07-15 18:28:44.743203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.356 [2024-07-15 18:28:44.743216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.356 [2024-07-15 18:28:44.743225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.356 [2024-07-15 18:28:44.743235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.356 [2024-07-15 18:28:44.743243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.356 [2024-07-15 18:28:44.743254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.356 [2024-07-15 18:28:44.743262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.356 [2024-07-15 18:28:44.743273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.356 [2024-07-15 18:28:44.743281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.356 [2024-07-15 18:28:44.743292] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.356 [2024-07-15 18:28:44.743300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.356 [2024-07-15 18:28:44.743311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.356 [2024-07-15 18:28:44.743320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.356 [2024-07-15 18:28:44.743330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.356 [2024-07-15 18:28:44.743339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.356 [2024-07-15 18:28:44.743349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.356 [2024-07-15 18:28:44.743358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.356 [2024-07-15 18:28:44.743369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.356 [2024-07-15 18:28:44.743378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.356 [2024-07-15 18:28:44.743388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.356 [2024-07-15 18:28:44.743397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.356 [2024-07-15 18:28:44.743408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.356 [2024-07-15 18:28:44.743417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.356 [2024-07-15 18:28:44.743427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.356 [2024-07-15 18:28:44.743436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.356 [2024-07-15 18:28:44.743446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:22.356 [2024-07-15 18:28:44.743455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:22.356 [2024-07-15 18:28:44.743521] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xbb2820 was disconnected and freed. reset controller. 
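The ABORTED - SQ DELETION flood above follows from host_management.sh pulling the host back out of the subsystem while bdevperf still has writes queued, then re-adding it so the controller can be reset and reconnected. A minimal sketch of that remove/re-add sequence, using only the RPC names and NQNs that appear in this log (the surrounding test-script plumbing is omitted):

    # remove the host: queued WRITEs complete as ABORTED - SQ DELETION and the qpair is torn down, as logged above
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    # re-add the host so the subsequent controller reset can reconnect
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0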
00:09:22.356 [2024-07-15 18:28:44.744452] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:09:22.356 task offset: 16384 on job bdev=Nvme0n1 fails 00:09:22.356 00:09:22.356 Latency(us) 00:09:22.356 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:22.356 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:22.356 Job: Nvme0n1 ended in about 0.55 seconds with error 00:09:22.356 Verification LBA range: start 0x0 length 0x400 00:09:22.356 Nvme0n1 : 0.55 2104.89 131.56 116.94 0.00 28167.58 1625.24 25582.73 00:09:22.356 =================================================================================================================== 00:09:22.356 Total : 2104.89 131.56 116.94 0.00 28167.58 1625.24 25582.73 00:09:22.356 [2024-07-15 18:28:44.746250] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:22.356 [2024-07-15 18:28:44.746270] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb2af0 (9): Bad file descriptor 00:09:22.356 18:28:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.356 18:28:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:09:22.356 [2024-07-15 18:28:44.756279] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:09:23.293 18:28:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 72661 00:09:23.293 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (72661) - No such process 00:09:23.293 18:28:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:09:23.293 18:28:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:09:23.293 18:28:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:09:23.293 18:28:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:09:23.293 18:28:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:09:23.293 18:28:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:09:23.293 18:28:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:23.293 18:28:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:23.293 { 00:09:23.293 "params": { 00:09:23.293 "name": "Nvme$subsystem", 00:09:23.293 "trtype": "$TEST_TRANSPORT", 00:09:23.293 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:23.293 "adrfam": "ipv4", 00:09:23.293 "trsvcid": "$NVMF_PORT", 00:09:23.293 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:23.293 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:23.293 "hdgst": ${hdgst:-false}, 00:09:23.293 "ddgst": ${ddgst:-false} 00:09:23.293 }, 00:09:23.293 "method": "bdev_nvme_attach_controller" 00:09:23.293 } 00:09:23.293 EOF 00:09:23.293 )") 00:09:23.293 18:28:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:09:23.293 18:28:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
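The gen_nvmf_target_json fragment built above resolves to a single bdev_nvme_attach_controller entry (the expanded values are printed by the printf just below). For reference, a hand-written equivalent of the config handed to bdevperf via --json, assuming the usual SPDK "subsystems"/"bdev" JSON-config envelope (the envelope itself is not echoed in this log, and the nvme0.json filename is only illustrative):

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }

    # equivalent invocation with a file instead of the /dev/fd/62 process substitution used here
    build/examples/bdevperf --json ./nvme0.json -q 64 -o 65536 -w verify -t 1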
00:09:23.293 18:28:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:09:23.293 18:28:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:23.293 "params": { 00:09:23.293 "name": "Nvme0", 00:09:23.293 "trtype": "tcp", 00:09:23.293 "traddr": "10.0.0.2", 00:09:23.293 "adrfam": "ipv4", 00:09:23.293 "trsvcid": "4420", 00:09:23.293 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:23.293 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:23.293 "hdgst": false, 00:09:23.293 "ddgst": false 00:09:23.293 }, 00:09:23.293 "method": "bdev_nvme_attach_controller" 00:09:23.293 }' 00:09:23.293 [2024-07-15 18:28:45.817033] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:09:23.293 [2024-07-15 18:28:45.817445] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72712 ] 00:09:23.552 [2024-07-15 18:28:45.962839] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.552 [2024-07-15 18:28:46.049946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.811 Running I/O for 1 seconds... 00:09:24.747 00:09:24.747 Latency(us) 00:09:24.747 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:24.747 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:24.747 Verification LBA range: start 0x0 length 0x400 00:09:24.747 Nvme0n1 : 1.02 2199.68 137.48 0.00 0.00 28625.90 4132.19 25898.56 00:09:24.747 =================================================================================================================== 00:09:24.747 Total : 2199.68 137.48 0.00 0.00 28625.90 4132.19 25898.56 00:09:25.006 18:28:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:09:25.007 18:28:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:09:25.007 18:28:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:09:25.007 18:28:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:09:25.007 18:28:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:09:25.007 18:28:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:25.007 18:28:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:09:25.007 18:28:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:25.007 18:28:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:09:25.007 18:28:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:25.007 18:28:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:25.007 rmmod nvme_tcp 00:09:25.007 rmmod nvme_fabrics 00:09:25.007 rmmod nvme_keyring 00:09:25.007 18:28:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:25.007 18:28:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:09:25.007 18:28:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:09:25.007 18:28:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 72589 ']' 00:09:25.007 18:28:47 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 72589 00:09:25.007 18:28:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 72589 ']' 00:09:25.007 18:28:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 72589 00:09:25.007 18:28:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:09:25.007 18:28:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:25.007 18:28:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72589 00:09:25.007 killing process with pid 72589 00:09:25.007 18:28:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:25.007 18:28:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:25.007 18:28:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72589' 00:09:25.007 18:28:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 72589 00:09:25.007 18:28:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 72589 00:09:25.265 [2024-07-15 18:28:47.747841] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:09:25.265 18:28:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:25.265 18:28:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:25.265 18:28:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:25.265 18:28:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:25.265 18:28:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:25.265 18:28:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:25.265 18:28:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:25.265 18:28:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:25.265 18:28:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:25.265 18:28:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:09:25.265 00:09:25.265 real 0m5.797s 00:09:25.265 user 0m21.628s 00:09:25.265 sys 0m1.552s 00:09:25.265 18:28:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:25.265 18:28:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:25.265 ************************************ 00:09:25.265 END TEST nvmf_host_management 00:09:25.265 ************************************ 00:09:25.525 18:28:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:25.525 18:28:47 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:25.525 18:28:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:25.525 18:28:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:25.525 18:28:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:25.525 ************************************ 00:09:25.525 START TEST nvmf_lvol 00:09:25.525 ************************************ 00:09:25.525 18:28:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- 
# /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:25.525 * Looking for test storage... 00:09:25.525 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:25.525 18:28:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:25.525 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:09:25.525 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:25.525 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:25.525 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:25.525 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:25.525 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:25.525 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:25.525 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:25.525 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:25.525 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:25.525 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:25.525 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:09:25.525 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=ee8aff67-4252-4979-91cf-1a72f40d57b6 00:09:25.525 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:25.525 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:25.525 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:25.525 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:25.525 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:25.525 18:28:48 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:25.525 18:28:48 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:25.525 18:28:48 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:25.525 18:28:48 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.525 18:28:48 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.525 18:28:48 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.525 18:28:48 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:09:25.525 18:28:48 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.525 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:09:25.525 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:25.525 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:25.525 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:25.525 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:25.525 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:25.525 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:25.525 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:25.525 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:25.525 18:28:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:25.525 18:28:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:25.525 18:28:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:09:25.525 18:28:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:09:25.525 18:28:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:25.525 18:28:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:09:25.525 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:25.525 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:25.525 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:25.525 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 
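The nvmftestinit/nvmf_veth_init sequence that follows builds a small veth-plus-bridge topology: the target runs inside the nvmf_tgt_ns_spdk namespace on 10.0.0.2 (and 10.0.0.3 for the second interface) while the initiator stays in the root namespace on 10.0.0.1, with everything bridged over nvmf_br. A condensed sketch of that setup, abridged from the commands logged below (the teardown of leftover interfaces, the link-up steps, and the ping checks are omitted):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

After that, the target is started inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt ...) and the three pings below verify that 10.0.0.2, 10.0.0.3 and 10.0.0.1 are reachable.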
00:09:25.525 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:25.525 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:25.525 18:28:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:25.525 18:28:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:25.525 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:25.525 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:25.525 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:25.525 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:25.525 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:25.525 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:25.525 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:25.525 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:25.525 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:25.525 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:25.525 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:25.525 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:25.525 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:25.525 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:25.525 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:25.525 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:25.525 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:25.525 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:25.525 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:25.525 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:25.525 Cannot find device "nvmf_tgt_br" 00:09:25.525 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # true 00:09:25.525 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:25.785 Cannot find device "nvmf_tgt_br2" 00:09:25.785 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # true 00:09:25.785 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:25.785 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:25.785 Cannot find device "nvmf_tgt_br" 00:09:25.785 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # true 00:09:25.785 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:25.785 Cannot find device "nvmf_tgt_br2" 00:09:25.785 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # true 00:09:25.785 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:25.785 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:25.785 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:09:25.785 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:25.785 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:09:25.785 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:25.785 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:25.785 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:09:25.785 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:25.785 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:25.785 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:25.785 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:25.785 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:25.785 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:25.785 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:25.785 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:25.785 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:25.785 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:25.785 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:25.785 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:25.785 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:25.785 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:25.785 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:25.785 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:25.785 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:25.785 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:25.785 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:26.043 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:26.043 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:26.043 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:26.043 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:26.043 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:26.043 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:26.043 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:09:26.043 00:09:26.043 --- 10.0.0.2 ping statistics --- 00:09:26.043 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:26.043 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:09:26.043 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:26.043 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:26.043 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:09:26.043 00:09:26.043 --- 10.0.0.3 ping statistics --- 00:09:26.043 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:26.043 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:09:26.043 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:26.043 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:26.044 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:09:26.044 00:09:26.044 --- 10.0.0.1 ping statistics --- 00:09:26.044 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:26.044 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:09:26.044 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:26.044 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@433 -- # return 0 00:09:26.044 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:26.044 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:26.044 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:26.044 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:26.044 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:26.044 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:26.044 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:26.044 18:28:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:09:26.044 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:26.044 18:28:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:26.044 18:28:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:26.044 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=72916 00:09:26.044 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 72916 00:09:26.044 18:28:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 72916 ']' 00:09:26.044 18:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:09:26.044 18:28:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:26.044 18:28:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:26.044 18:28:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:26.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:26.044 18:28:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:26.044 18:28:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:26.044 [2024-07-15 18:28:48.548864] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 
00:09:26.044 [2024-07-15 18:28:48.548936] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:26.301 [2024-07-15 18:28:48.690419] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:26.301 [2024-07-15 18:28:48.784015] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:26.301 [2024-07-15 18:28:48.784056] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:26.301 [2024-07-15 18:28:48.784065] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:26.301 [2024-07-15 18:28:48.784073] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:26.301 [2024-07-15 18:28:48.784080] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:26.301 [2024-07-15 18:28:48.785174] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:26.301 [2024-07-15 18:28:48.785275] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.301 [2024-07-15 18:28:48.785277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:26.868 18:28:49 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:26.868 18:28:49 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:09:26.868 18:28:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:26.868 18:28:49 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:26.868 18:28:49 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:26.868 18:28:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:26.868 18:28:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:27.126 [2024-07-15 18:28:49.626705] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:27.126 18:28:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:27.384 18:28:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:09:27.384 18:28:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:27.642 18:28:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:09:27.642 18:28:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:09:27.900 18:28:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:09:28.158 18:28:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=a16e3ce8-64d6-4528-a2e9-e2d2b3d6cc31 00:09:28.158 18:28:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u a16e3ce8-64d6-4528-a2e9-e2d2b3d6cc31 lvol 20 00:09:28.418 18:28:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=0f3964d8-6de2-4ec2-a025-d3624806fab1 00:09:28.418 18:28:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:28.418 18:28:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0f3964d8-6de2-4ec2-a025-d3624806fab1 00:09:28.676 18:28:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:28.935 [2024-07-15 18:28:51.392355] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:28.935 18:28:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:29.194 18:28:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=73058 00:09:29.194 18:28:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:09:29.194 18:28:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:09:30.130 18:28:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 0f3964d8-6de2-4ec2-a025-d3624806fab1 MY_SNAPSHOT 00:09:30.388 18:28:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=4e3e60a8-11e0-44c3-9cd8-e77daadbde2a 00:09:30.388 18:28:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 0f3964d8-6de2-4ec2-a025-d3624806fab1 30 00:09:30.647 18:28:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 4e3e60a8-11e0-44c3-9cd8-e77daadbde2a MY_CLONE 00:09:30.904 18:28:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=c8a7dc4e-9321-43bc-b133-89e3d0aecd83 00:09:30.904 18:28:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate c8a7dc4e-9321-43bc-b133-89e3d0aecd83 00:09:31.472 18:28:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 73058 00:09:39.586 Initializing NVMe Controllers 00:09:39.586 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:39.586 Controller IO queue size 128, less than required. 00:09:39.586 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:39.586 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:09:39.586 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:09:39.586 Initialization complete. Launching workers. 
00:09:39.586 ======================================================== 00:09:39.586 Latency(us) 00:09:39.586 Device Information : IOPS MiB/s Average min max 00:09:39.586 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12500.30 48.83 10242.62 1998.86 40450.80 00:09:39.586 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12538.60 48.98 10212.31 2145.73 97702.72 00:09:39.586 ======================================================== 00:09:39.586 Total : 25038.90 97.81 10227.44 1998.86 97702.72 00:09:39.586 00:09:39.586 18:29:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:39.586 18:29:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 0f3964d8-6de2-4ec2-a025-d3624806fab1 00:09:39.845 18:29:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a16e3ce8-64d6-4528-a2e9-e2d2b3d6cc31 00:09:40.103 18:29:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:09:40.103 18:29:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:09:40.103 18:29:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:09:40.103 18:29:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:40.103 18:29:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:09:40.103 18:29:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:40.103 18:29:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:09:40.104 18:29:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:40.104 18:29:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:40.104 rmmod nvme_tcp 00:09:40.104 rmmod nvme_fabrics 00:09:40.104 rmmod nvme_keyring 00:09:40.104 18:29:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:40.104 18:29:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:09:40.104 18:29:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:09:40.104 18:29:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 72916 ']' 00:09:40.104 18:29:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 72916 00:09:40.104 18:29:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 72916 ']' 00:09:40.104 18:29:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 72916 00:09:40.104 18:29:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:09:40.104 18:29:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:40.104 18:29:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72916 00:09:40.104 killing process with pid 72916 00:09:40.104 18:29:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:40.104 18:29:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:40.104 18:29:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72916' 00:09:40.104 18:29:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 72916 00:09:40.104 18:29:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 72916 00:09:40.362 18:29:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:40.362 18:29:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
00:09:40.362 18:29:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:40.362 18:29:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:40.362 18:29:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:40.362 18:29:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:40.362 18:29:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:40.362 18:29:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:40.362 18:29:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:40.362 ************************************ 00:09:40.362 END TEST nvmf_lvol 00:09:40.362 ************************************ 00:09:40.362 00:09:40.362 real 0m15.054s 00:09:40.362 user 1m1.394s 00:09:40.362 sys 0m5.096s 00:09:40.362 18:29:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:40.362 18:29:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:40.621 18:29:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:40.621 18:29:03 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:40.621 18:29:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:40.621 18:29:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:40.621 18:29:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:40.621 ************************************ 00:09:40.621 START TEST nvmf_lvs_grow 00:09:40.621 ************************************ 00:09:40.621 18:29:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:40.621 * Looking for test storage... 
00:09:40.621 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:40.621 18:29:03 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:40.621 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:09:40.621 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:40.621 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:40.621 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:40.621 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:40.621 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:40.621 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:40.621 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:40.621 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:40.621 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:40.621 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:40.621 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:09:40.621 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=ee8aff67-4252-4979-91cf-1a72f40d57b6 00:09:40.621 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:40.621 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:40.621 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:40.622 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:40.622 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:40.622 18:29:03 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:40.622 18:29:03 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:40.622 18:29:03 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:40.622 18:29:03 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.622 18:29:03 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.622 18:29:03 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.622 18:29:03 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:09:40.622 18:29:03 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.622 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:09:40.622 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:40.622 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:40.622 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:40.622 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:40.622 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:40.622 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:40.622 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:40.622 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:40.622 18:29:03 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:40.622 18:29:03 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:40.622 18:29:03 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:09:40.622 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:40.622 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:40.622 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:40.622 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:40.622 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:40.622 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:09:40.622 18:29:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:40.622 18:29:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:40.622 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:40.622 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:40.622 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:40.622 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:40.622 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:40.622 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:40.622 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:40.622 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:40.622 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:40.622 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:40.622 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:40.622 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:40.622 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:40.622 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:40.622 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:40.622 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:40.622 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:40.622 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:40.622 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:40.892 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:40.892 Cannot find device "nvmf_tgt_br" 00:09:40.892 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # true 00:09:40.892 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:40.892 Cannot find device "nvmf_tgt_br2" 00:09:40.892 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # true 00:09:40.892 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:40.892 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:40.892 Cannot find device "nvmf_tgt_br" 00:09:40.892 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # true 00:09:40.892 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:40.892 Cannot find device "nvmf_tgt_br2" 00:09:40.892 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # true 00:09:40.892 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:40.892 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:40.892 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:40.892 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:09:40.892 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:09:40.892 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:40.892 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:40.892 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:09:40.892 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:40.892 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:40.892 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:40.892 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:40.892 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:40.892 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:40.892 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:40.892 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:40.892 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:40.892 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:41.156 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:41.156 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:41.156 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:41.156 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:41.156 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:41.156 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:41.156 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:41.156 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:41.156 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:41.156 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:41.157 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:41.157 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:41.157 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:41.157 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:41.157 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:41.157 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.108 ms 00:09:41.157 00:09:41.157 --- 10.0.0.2 ping statistics --- 00:09:41.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.157 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:09:41.157 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:41.157 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:41.157 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:09:41.157 00:09:41.157 --- 10.0.0.3 ping statistics --- 00:09:41.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.157 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:09:41.157 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:41.157 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:41.157 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:09:41.157 00:09:41.157 --- 10.0.0.1 ping statistics --- 00:09:41.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.157 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:09:41.157 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:41.157 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@433 -- # return 0 00:09:41.157 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:41.157 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:41.157 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:41.157 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:41.157 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:41.157 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:41.157 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:41.157 18:29:03 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:09:41.157 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:41.157 18:29:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:41.157 18:29:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:41.157 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:41.157 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=73422 00:09:41.157 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 73422 00:09:41.157 18:29:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 73422 ']' 00:09:41.157 18:29:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:41.157 18:29:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:41.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:41.157 18:29:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
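Before nvmf_tgt is launched for the lvs_grow tests, nvmf_veth_init builds the virtual topology whose pings are shown above: the target interfaces live in the nvmf_tgt_ns_spdk namespace (10.0.0.2 and 10.0.0.3), the initiator side stays in the root namespace (10.0.0.1), and a bridge joins the veth peer ends. Condensed from the commands in the trace (run as root; cleanup of leftovers omitted):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2   # initiator -> target namespace, matching the statistics above

# The target itself then runs inside the namespace (pid 73422 in this run):
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &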
00:09:41.157 18:29:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:41.157 18:29:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:41.157 [2024-07-15 18:29:03.704025] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:09:41.157 [2024-07-15 18:29:03.704095] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:41.416 [2024-07-15 18:29:03.846620] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.416 [2024-07-15 18:29:03.938132] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:41.416 [2024-07-15 18:29:03.938178] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:41.416 [2024-07-15 18:29:03.938188] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:41.416 [2024-07-15 18:29:03.938196] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:41.416 [2024-07-15 18:29:03.938202] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:41.416 [2024-07-15 18:29:03.938234] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.984 18:29:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:41.984 18:29:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:09:41.984 18:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:41.984 18:29:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:41.984 18:29:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:42.243 18:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:42.243 18:29:04 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:42.243 [2024-07-15 18:29:04.804113] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:42.243 18:29:04 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:09:42.243 18:29:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:42.243 18:29:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:42.243 18:29:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:42.243 ************************************ 00:09:42.243 START TEST lvs_grow_clean 00:09:42.243 ************************************ 00:09:42.243 18:29:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:09:42.243 18:29:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:42.243 18:29:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:42.243 18:29:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:42.243 18:29:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:42.243 18:29:04 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:42.243 18:29:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:42.243 18:29:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:42.243 18:29:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:42.243 18:29:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:42.502 18:29:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:42.502 18:29:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:42.761 18:29:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=446405f7-fd0b-4fe6-9890-ac58a1581519 00:09:42.761 18:29:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 446405f7-fd0b-4fe6-9890-ac58a1581519 00:09:42.761 18:29:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:43.020 18:29:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:43.020 18:29:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:43.020 18:29:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 446405f7-fd0b-4fe6-9890-ac58a1581519 lvol 150 00:09:43.278 18:29:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=fe2123a4-c44c-4e17-8bc3-de5ab41b9b07 00:09:43.278 18:29:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:43.278 18:29:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:43.537 [2024-07-15 18:29:05.914754] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:43.537 [2024-07-15 18:29:05.914817] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:43.537 true 00:09:43.537 18:29:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 446405f7-fd0b-4fe6-9890-ac58a1581519 00:09:43.537 18:29:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:43.537 18:29:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:43.537 18:29:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:43.797 18:29:06 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 fe2123a4-c44c-4e17-8bc3-de5ab41b9b07 00:09:44.055 18:29:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:44.313 [2024-07-15 18:29:06.718085] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:44.313 18:29:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:44.571 18:29:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=73578 00:09:44.571 18:29:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:44.571 18:29:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:44.571 18:29:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 73578 /var/tmp/bdevperf.sock 00:09:44.571 18:29:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 73578 ']' 00:09:44.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:44.571 18:29:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:44.571 18:29:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:44.571 18:29:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:44.571 18:29:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:44.571 18:29:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:44.571 [2024-07-15 18:29:06.991117] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 
00:09:44.571 [2024-07-15 18:29:06.991213] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73578 ] 00:09:44.571 [2024-07-15 18:29:07.133123] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.828 [2024-07-15 18:29:07.222117] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:45.394 18:29:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:45.394 18:29:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:09:45.394 18:29:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:45.696 Nvme0n1 00:09:45.696 18:29:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:45.953 [ 00:09:45.953 { 00:09:45.953 "aliases": [ 00:09:45.953 "fe2123a4-c44c-4e17-8bc3-de5ab41b9b07" 00:09:45.953 ], 00:09:45.953 "assigned_rate_limits": { 00:09:45.953 "r_mbytes_per_sec": 0, 00:09:45.953 "rw_ios_per_sec": 0, 00:09:45.953 "rw_mbytes_per_sec": 0, 00:09:45.953 "w_mbytes_per_sec": 0 00:09:45.953 }, 00:09:45.953 "block_size": 4096, 00:09:45.954 "claimed": false, 00:09:45.954 "driver_specific": { 00:09:45.954 "mp_policy": "active_passive", 00:09:45.954 "nvme": [ 00:09:45.954 { 00:09:45.954 "ctrlr_data": { 00:09:45.954 "ana_reporting": false, 00:09:45.954 "cntlid": 1, 00:09:45.954 "firmware_revision": "24.09", 00:09:45.954 "model_number": "SPDK bdev Controller", 00:09:45.954 "multi_ctrlr": true, 00:09:45.954 "oacs": { 00:09:45.954 "firmware": 0, 00:09:45.954 "format": 0, 00:09:45.954 "ns_manage": 0, 00:09:45.954 "security": 0 00:09:45.954 }, 00:09:45.954 "serial_number": "SPDK0", 00:09:45.954 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:45.954 "vendor_id": "0x8086" 00:09:45.954 }, 00:09:45.954 "ns_data": { 00:09:45.954 "can_share": true, 00:09:45.954 "id": 1 00:09:45.954 }, 00:09:45.954 "trid": { 00:09:45.954 "adrfam": "IPv4", 00:09:45.954 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:45.954 "traddr": "10.0.0.2", 00:09:45.954 "trsvcid": "4420", 00:09:45.954 "trtype": "TCP" 00:09:45.954 }, 00:09:45.954 "vs": { 00:09:45.954 "nvme_version": "1.3" 00:09:45.954 } 00:09:45.954 } 00:09:45.954 ] 00:09:45.954 }, 00:09:45.954 "memory_domains": [ 00:09:45.954 { 00:09:45.954 "dma_device_id": "system", 00:09:45.954 "dma_device_type": 1 00:09:45.954 } 00:09:45.954 ], 00:09:45.954 "name": "Nvme0n1", 00:09:45.954 "num_blocks": 38912, 00:09:45.954 "product_name": "NVMe disk", 00:09:45.954 "supported_io_types": { 00:09:45.954 "abort": true, 00:09:45.954 "compare": true, 00:09:45.954 "compare_and_write": true, 00:09:45.954 "copy": true, 00:09:45.954 "flush": true, 00:09:45.954 "get_zone_info": false, 00:09:45.954 "nvme_admin": true, 00:09:45.954 "nvme_io": true, 00:09:45.954 "nvme_io_md": false, 00:09:45.954 "nvme_iov_md": false, 00:09:45.954 "read": true, 00:09:45.954 "reset": true, 00:09:45.954 "seek_data": false, 00:09:45.954 "seek_hole": false, 00:09:45.954 "unmap": true, 00:09:45.954 "write": true, 00:09:45.954 "write_zeroes": true, 00:09:45.954 "zcopy": false, 00:09:45.954 
"zone_append": false, 00:09:45.954 "zone_management": false 00:09:45.954 }, 00:09:45.954 "uuid": "fe2123a4-c44c-4e17-8bc3-de5ab41b9b07", 00:09:45.954 "zoned": false 00:09:45.954 } 00:09:45.954 ] 00:09:45.954 18:29:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=73626 00:09:45.954 18:29:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:45.954 18:29:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:45.954 Running I/O for 10 seconds... 00:09:46.886 Latency(us) 00:09:46.886 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:46.886 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:46.886 Nvme0n1 : 1.00 11584.00 45.25 0.00 0.00 0.00 0.00 0.00 00:09:46.886 =================================================================================================================== 00:09:46.886 Total : 11584.00 45.25 0.00 0.00 0.00 0.00 0.00 00:09:46.886 00:09:47.820 18:29:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 446405f7-fd0b-4fe6-9890-ac58a1581519 00:09:47.820 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:47.820 Nvme0n1 : 2.00 11621.00 45.39 0.00 0.00 0.00 0.00 0.00 00:09:47.820 =================================================================================================================== 00:09:47.820 Total : 11621.00 45.39 0.00 0.00 0.00 0.00 0.00 00:09:47.820 00:09:48.079 true 00:09:48.079 18:29:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:48.079 18:29:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 446405f7-fd0b-4fe6-9890-ac58a1581519 00:09:48.337 18:29:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:48.337 18:29:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:48.337 18:29:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 73626 00:09:48.906 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:48.906 Nvme0n1 : 3.00 11508.67 44.96 0.00 0.00 0.00 0.00 0.00 00:09:48.906 =================================================================================================================== 00:09:48.906 Total : 11508.67 44.96 0.00 0.00 0.00 0.00 0.00 00:09:48.906 00:09:49.853 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:49.853 Nvme0n1 : 4.00 11405.75 44.55 0.00 0.00 0.00 0.00 0.00 00:09:49.853 =================================================================================================================== 00:09:49.853 Total : 11405.75 44.55 0.00 0.00 0.00 0.00 0.00 00:09:49.853 00:09:51.228 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:51.228 Nvme0n1 : 5.00 11286.40 44.09 0.00 0.00 0.00 0.00 0.00 00:09:51.228 =================================================================================================================== 00:09:51.228 Total : 11286.40 44.09 0.00 0.00 0.00 0.00 0.00 00:09:51.228 00:09:52.164 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:09:52.164 Nvme0n1 : 6.00 11232.83 43.88 0.00 0.00 0.00 0.00 0.00 00:09:52.164 =================================================================================================================== 00:09:52.164 Total : 11232.83 43.88 0.00 0.00 0.00 0.00 0.00 00:09:52.164 00:09:53.135 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:53.135 Nvme0n1 : 7.00 11161.43 43.60 0.00 0.00 0.00 0.00 0.00 00:09:53.135 =================================================================================================================== 00:09:53.135 Total : 11161.43 43.60 0.00 0.00 0.00 0.00 0.00 00:09:53.135 00:09:54.069 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:54.069 Nvme0n1 : 8.00 11094.25 43.34 0.00 0.00 0.00 0.00 0.00 00:09:54.069 =================================================================================================================== 00:09:54.069 Total : 11094.25 43.34 0.00 0.00 0.00 0.00 0.00 00:09:54.069 00:09:55.003 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:55.003 Nvme0n1 : 9.00 11044.33 43.14 0.00 0.00 0.00 0.00 0.00 00:09:55.003 =================================================================================================================== 00:09:55.003 Total : 11044.33 43.14 0.00 0.00 0.00 0.00 0.00 00:09:55.003 00:09:55.952 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:55.952 Nvme0n1 : 10.00 11014.90 43.03 0.00 0.00 0.00 0.00 0.00 00:09:55.952 =================================================================================================================== 00:09:55.952 Total : 11014.90 43.03 0.00 0.00 0.00 0.00 0.00 00:09:55.952 00:09:55.952 00:09:55.952 Latency(us) 00:09:55.952 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:55.952 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:55.952 Nvme0n1 : 10.01 11014.42 43.03 0.00 0.00 11617.05 5027.06 25266.89 00:09:55.952 =================================================================================================================== 00:09:55.952 Total : 11014.42 43.03 0.00 0.00 11617.05 5027.06 25266.89 00:09:55.952 0 00:09:55.952 18:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 73578 00:09:55.952 18:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 73578 ']' 00:09:55.952 18:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 73578 00:09:55.952 18:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:09:55.952 18:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:55.952 18:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73578 00:09:55.952 18:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:55.952 18:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:55.952 killing process with pid 73578 00:09:55.952 18:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73578' 00:09:55.952 Received shutdown signal, test time was about 10.000000 seconds 00:09:55.952 00:09:55.952 Latency(us) 00:09:55.952 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:55.952 
=================================================================================================================== 00:09:55.952 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:55.952 18:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 73578 00:09:55.952 18:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 73578 00:09:56.210 18:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:56.468 18:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:56.726 18:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 446405f7-fd0b-4fe6-9890-ac58a1581519 00:09:56.726 18:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:56.726 18:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:56.726 18:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:56.726 18:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:56.985 [2024-07-15 18:29:19.474315] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:56.985 18:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 446405f7-fd0b-4fe6-9890-ac58a1581519 00:09:56.985 18:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:09:56.985 18:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 446405f7-fd0b-4fe6-9890-ac58a1581519 00:09:56.985 18:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:56.985 18:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:56.985 18:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:56.985 18:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:56.985 18:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:56.985 18:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:56.985 18:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:56.985 18:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:56.985 18:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 446405f7-fd0b-4fe6-9890-ac58a1581519 00:09:57.244 2024/07/15 18:29:19 error on 
JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:446405f7-fd0b-4fe6-9890-ac58a1581519], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:09:57.244 request: 00:09:57.244 { 00:09:57.244 "method": "bdev_lvol_get_lvstores", 00:09:57.244 "params": { 00:09:57.244 "uuid": "446405f7-fd0b-4fe6-9890-ac58a1581519" 00:09:57.244 } 00:09:57.244 } 00:09:57.244 Got JSON-RPC error response 00:09:57.244 GoRPCClient: error on JSON-RPC call 00:09:57.244 18:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:09:57.244 18:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:57.244 18:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:57.244 18:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:57.244 18:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:57.503 aio_bdev 00:09:57.503 18:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev fe2123a4-c44c-4e17-8bc3-de5ab41b9b07 00:09:57.503 18:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=fe2123a4-c44c-4e17-8bc3-de5ab41b9b07 00:09:57.503 18:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:57.503 18:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:09:57.503 18:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:57.503 18:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:57.503 18:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:57.762 18:29:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fe2123a4-c44c-4e17-8bc3-de5ab41b9b07 -t 2000 00:09:57.762 [ 00:09:57.762 { 00:09:57.762 "aliases": [ 00:09:57.762 "lvs/lvol" 00:09:57.762 ], 00:09:57.762 "assigned_rate_limits": { 00:09:57.762 "r_mbytes_per_sec": 0, 00:09:57.762 "rw_ios_per_sec": 0, 00:09:57.762 "rw_mbytes_per_sec": 0, 00:09:57.762 "w_mbytes_per_sec": 0 00:09:57.762 }, 00:09:57.762 "block_size": 4096, 00:09:57.762 "claimed": false, 00:09:57.762 "driver_specific": { 00:09:57.762 "lvol": { 00:09:57.762 "base_bdev": "aio_bdev", 00:09:57.762 "clone": false, 00:09:57.762 "esnap_clone": false, 00:09:57.762 "lvol_store_uuid": "446405f7-fd0b-4fe6-9890-ac58a1581519", 00:09:57.762 "num_allocated_clusters": 38, 00:09:57.762 "snapshot": false, 00:09:57.762 "thin_provision": false 00:09:57.762 } 00:09:57.762 }, 00:09:57.762 "name": "fe2123a4-c44c-4e17-8bc3-de5ab41b9b07", 00:09:57.762 "num_blocks": 38912, 00:09:57.762 "product_name": "Logical Volume", 00:09:57.762 "supported_io_types": { 00:09:57.762 "abort": false, 00:09:57.762 "compare": false, 00:09:57.762 "compare_and_write": false, 00:09:57.762 "copy": false, 00:09:57.762 "flush": false, 00:09:57.762 "get_zone_info": false, 00:09:57.762 "nvme_admin": false, 00:09:57.762 "nvme_io": false, 00:09:57.762 "nvme_io_md": false, 00:09:57.762 "nvme_iov_md": false, 00:09:57.762 "read": true, 
00:09:57.762 "reset": true, 00:09:57.762 "seek_data": true, 00:09:57.762 "seek_hole": true, 00:09:57.762 "unmap": true, 00:09:57.762 "write": true, 00:09:57.762 "write_zeroes": true, 00:09:57.762 "zcopy": false, 00:09:57.762 "zone_append": false, 00:09:57.762 "zone_management": false 00:09:57.762 }, 00:09:57.762 "uuid": "fe2123a4-c44c-4e17-8bc3-de5ab41b9b07", 00:09:57.762 "zoned": false 00:09:57.762 } 00:09:57.762 ] 00:09:57.762 18:29:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:09:57.762 18:29:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 446405f7-fd0b-4fe6-9890-ac58a1581519 00:09:57.762 18:29:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:58.021 18:29:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:58.021 18:29:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:58.021 18:29:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 446405f7-fd0b-4fe6-9890-ac58a1581519 00:09:58.320 18:29:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:58.320 18:29:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete fe2123a4-c44c-4e17-8bc3-de5ab41b9b07 00:09:58.605 18:29:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 446405f7-fd0b-4fe6-9890-ac58a1581519 00:09:58.863 18:29:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:58.863 18:29:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:59.431 ************************************ 00:09:59.431 END TEST lvs_grow_clean 00:09:59.431 ************************************ 00:09:59.431 00:09:59.431 real 0m17.074s 00:09:59.431 user 0m15.487s 00:09:59.431 sys 0m2.774s 00:09:59.431 18:29:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:59.431 18:29:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:59.431 18:29:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:09:59.431 18:29:21 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:59.431 18:29:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:59.431 18:29:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:59.431 18:29:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:59.431 ************************************ 00:09:59.431 START TEST lvs_grow_dirty 00:09:59.431 ************************************ 00:09:59.431 18:29:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:09:59.431 18:29:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:59.431 18:29:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 
-- # local data_clusters free_clusters 00:09:59.431 18:29:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:59.431 18:29:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:59.431 18:29:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:59.431 18:29:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:59.431 18:29:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:59.431 18:29:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:59.431 18:29:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:59.690 18:29:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:59.690 18:29:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:59.948 18:29:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=9e91d3d9-1bb3-4473-8f7c-fcc77bc46363 00:09:59.948 18:29:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9e91d3d9-1bb3-4473-8f7c-fcc77bc46363 00:09:59.948 18:29:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:00.236 18:29:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:00.236 18:29:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:00.236 18:29:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 9e91d3d9-1bb3-4473-8f7c-fcc77bc46363 lvol 150 00:10:00.494 18:29:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=9090c891-a1d6-47ed-a175-6ee6fc92006d 00:10:00.494 18:29:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:00.494 18:29:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:00.752 [2024-07-15 18:29:23.133960] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:00.752 [2024-07-15 18:29:23.134025] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:00.752 true 00:10:00.752 18:29:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9e91d3d9-1bb3-4473-8f7c-fcc77bc46363 00:10:00.752 18:29:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:00.752 18:29:23 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:00.752 18:29:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:01.010 18:29:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 9090c891-a1d6-47ed-a175-6ee6fc92006d 00:10:01.268 18:29:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:01.526 [2024-07-15 18:29:23.906130] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:01.526 18:29:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:01.526 18:29:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=74012 00:10:01.526 18:29:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:01.526 18:29:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:01.526 18:29:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 74012 /var/tmp/bdevperf.sock 00:10:01.526 18:29:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 74012 ']' 00:10:01.526 18:29:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:01.526 18:29:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:01.526 18:29:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:01.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:01.526 18:29:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:01.526 18:29:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:01.787 [2024-07-15 18:29:24.166219] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 
00:10:01.787 [2024-07-15 18:29:24.166316] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74012 ] 00:10:01.787 [2024-07-15 18:29:24.309236] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:02.054 [2024-07-15 18:29:24.415886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:02.632 18:29:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:02.632 18:29:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:10:02.632 18:29:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:02.893 Nvme0n1 00:10:02.893 18:29:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:02.893 [ 00:10:02.893 { 00:10:02.893 "aliases": [ 00:10:02.893 "9090c891-a1d6-47ed-a175-6ee6fc92006d" 00:10:02.893 ], 00:10:02.893 "assigned_rate_limits": { 00:10:02.893 "r_mbytes_per_sec": 0, 00:10:02.893 "rw_ios_per_sec": 0, 00:10:02.893 "rw_mbytes_per_sec": 0, 00:10:02.893 "w_mbytes_per_sec": 0 00:10:02.893 }, 00:10:02.893 "block_size": 4096, 00:10:02.893 "claimed": false, 00:10:02.893 "driver_specific": { 00:10:02.893 "mp_policy": "active_passive", 00:10:02.893 "nvme": [ 00:10:02.893 { 00:10:02.893 "ctrlr_data": { 00:10:02.893 "ana_reporting": false, 00:10:02.893 "cntlid": 1, 00:10:02.893 "firmware_revision": "24.09", 00:10:02.893 "model_number": "SPDK bdev Controller", 00:10:02.893 "multi_ctrlr": true, 00:10:02.893 "oacs": { 00:10:02.893 "firmware": 0, 00:10:02.893 "format": 0, 00:10:02.893 "ns_manage": 0, 00:10:02.893 "security": 0 00:10:02.893 }, 00:10:02.893 "serial_number": "SPDK0", 00:10:02.893 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:02.893 "vendor_id": "0x8086" 00:10:02.893 }, 00:10:02.893 "ns_data": { 00:10:02.893 "can_share": true, 00:10:02.893 "id": 1 00:10:02.893 }, 00:10:02.893 "trid": { 00:10:02.893 "adrfam": "IPv4", 00:10:02.893 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:02.893 "traddr": "10.0.0.2", 00:10:02.893 "trsvcid": "4420", 00:10:02.893 "trtype": "TCP" 00:10:02.893 }, 00:10:02.893 "vs": { 00:10:02.893 "nvme_version": "1.3" 00:10:02.893 } 00:10:02.893 } 00:10:02.893 ] 00:10:02.893 }, 00:10:02.893 "memory_domains": [ 00:10:02.893 { 00:10:02.893 "dma_device_id": "system", 00:10:02.893 "dma_device_type": 1 00:10:02.893 } 00:10:02.893 ], 00:10:02.893 "name": "Nvme0n1", 00:10:02.893 "num_blocks": 38912, 00:10:02.893 "product_name": "NVMe disk", 00:10:02.893 "supported_io_types": { 00:10:02.893 "abort": true, 00:10:02.893 "compare": true, 00:10:02.893 "compare_and_write": true, 00:10:02.893 "copy": true, 00:10:02.893 "flush": true, 00:10:02.893 "get_zone_info": false, 00:10:02.893 "nvme_admin": true, 00:10:02.893 "nvme_io": true, 00:10:02.893 "nvme_io_md": false, 00:10:02.893 "nvme_iov_md": false, 00:10:02.893 "read": true, 00:10:02.893 "reset": true, 00:10:02.893 "seek_data": false, 00:10:02.893 "seek_hole": false, 00:10:02.893 "unmap": true, 00:10:02.893 "write": true, 00:10:02.893 "write_zeroes": true, 00:10:02.893 "zcopy": false, 00:10:02.893 
"zone_append": false, 00:10:02.893 "zone_management": false 00:10:02.893 }, 00:10:02.893 "uuid": "9090c891-a1d6-47ed-a175-6ee6fc92006d", 00:10:02.893 "zoned": false 00:10:02.893 } 00:10:02.893 ] 00:10:02.893 18:29:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=74054 00:10:02.893 18:29:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:02.894 18:29:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:03.151 Running I/O for 10 seconds... 00:10:04.120 Latency(us) 00:10:04.120 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:04.120 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:04.120 Nvme0n1 : 1.00 11874.00 46.38 0.00 0.00 0.00 0.00 0.00 00:10:04.120 =================================================================================================================== 00:10:04.120 Total : 11874.00 46.38 0.00 0.00 0.00 0.00 0.00 00:10:04.120 00:10:05.064 18:29:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 9e91d3d9-1bb3-4473-8f7c-fcc77bc46363 00:10:05.064 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:05.064 Nvme0n1 : 2.00 11740.00 45.86 0.00 0.00 0.00 0.00 0.00 00:10:05.064 =================================================================================================================== 00:10:05.064 Total : 11740.00 45.86 0.00 0.00 0.00 0.00 0.00 00:10:05.064 00:10:05.322 true 00:10:05.322 18:29:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9e91d3d9-1bb3-4473-8f7c-fcc77bc46363 00:10:05.322 18:29:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:05.581 18:29:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:05.581 18:29:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:05.581 18:29:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 74054 00:10:06.148 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:06.148 Nvme0n1 : 3.00 11627.00 45.42 0.00 0.00 0.00 0.00 0.00 00:10:06.148 =================================================================================================================== 00:10:06.148 Total : 11627.00 45.42 0.00 0.00 0.00 0.00 0.00 00:10:06.148 00:10:07.081 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:07.081 Nvme0n1 : 4.00 11508.75 44.96 0.00 0.00 0.00 0.00 0.00 00:10:07.081 =================================================================================================================== 00:10:07.081 Total : 11508.75 44.96 0.00 0.00 0.00 0.00 0.00 00:10:07.081 00:10:08.015 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:08.015 Nvme0n1 : 5.00 11404.80 44.55 0.00 0.00 0.00 0.00 0.00 00:10:08.015 =================================================================================================================== 00:10:08.015 Total : 11404.80 44.55 0.00 0.00 0.00 0.00 0.00 00:10:08.015 00:10:08.946 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:10:08.946 Nvme0n1 : 6.00 11073.33 43.26 0.00 0.00 0.00 0.00 0.00 00:10:08.946 =================================================================================================================== 00:10:08.946 Total : 11073.33 43.26 0.00 0.00 0.00 0.00 0.00 00:10:08.946 00:10:10.319 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:10.319 Nvme0n1 : 7.00 11067.29 43.23 0.00 0.00 0.00 0.00 0.00 00:10:10.319 =================================================================================================================== 00:10:10.319 Total : 11067.29 43.23 0.00 0.00 0.00 0.00 0.00 00:10:10.319 00:10:11.251 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:11.251 Nvme0n1 : 8.00 11070.25 43.24 0.00 0.00 0.00 0.00 0.00 00:10:11.251 =================================================================================================================== 00:10:11.251 Total : 11070.25 43.24 0.00 0.00 0.00 0.00 0.00 00:10:11.251 00:10:12.186 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:12.186 Nvme0n1 : 9.00 10941.22 42.74 0.00 0.00 0.00 0.00 0.00 00:10:12.186 =================================================================================================================== 00:10:12.186 Total : 10941.22 42.74 0.00 0.00 0.00 0.00 0.00 00:10:12.186 00:10:13.120 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:13.120 Nvme0n1 : 10.00 10892.80 42.55 0.00 0.00 0.00 0.00 0.00 00:10:13.120 =================================================================================================================== 00:10:13.120 Total : 10892.80 42.55 0.00 0.00 0.00 0.00 0.00 00:10:13.120 00:10:13.120 00:10:13.120 Latency(us) 00:10:13.120 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:13.120 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:13.120 Nvme0n1 : 10.01 10895.07 42.56 0.00 0.00 11741.70 3579.48 181079.39 00:10:13.120 =================================================================================================================== 00:10:13.120 Total : 10895.07 42.56 0.00 0.00 11741.70 3579.48 181079.39 00:10:13.120 0 00:10:13.120 18:29:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 74012 00:10:13.120 18:29:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 74012 ']' 00:10:13.120 18:29:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 74012 00:10:13.120 18:29:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:10:13.120 18:29:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:13.120 18:29:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74012 00:10:13.120 18:29:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:13.120 18:29:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:13.120 18:29:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74012' 00:10:13.120 killing process with pid 74012 00:10:13.120 Received shutdown signal, test time was about 10.000000 seconds 00:10:13.120 00:10:13.120 Latency(us) 00:10:13.120 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:13.120 
=================================================================================================================== 00:10:13.120 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:13.120 18:29:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 74012 00:10:13.120 18:29:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 74012 00:10:13.379 18:29:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:13.637 18:29:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:13.637 18:29:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9e91d3d9-1bb3-4473-8f7c-fcc77bc46363 00:10:13.637 18:29:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:13.895 18:29:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:10:13.895 18:29:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:10:13.895 18:29:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 73422 00:10:13.895 18:29:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 73422 00:10:13.895 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 73422 Killed "${NVMF_APP[@]}" "$@" 00:10:13.895 18:29:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:10:13.895 18:29:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:10:13.895 18:29:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:13.895 18:29:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:13.895 18:29:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:13.895 18:29:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=74217 00:10:13.895 18:29:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 74217 00:10:13.895 18:29:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:13.895 18:29:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 74217 ']' 00:10:13.895 18:29:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:13.895 18:29:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:13.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:13.895 18:29:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:13.895 18:29:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:13.895 18:29:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:14.154 [2024-07-15 18:29:36.536529] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:10:14.154 [2024-07-15 18:29:36.536623] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:14.154 [2024-07-15 18:29:36.680924] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:14.412 [2024-07-15 18:29:36.773461] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:14.412 [2024-07-15 18:29:36.773515] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:14.412 [2024-07-15 18:29:36.773524] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:14.412 [2024-07-15 18:29:36.773533] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:14.412 [2024-07-15 18:29:36.773539] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:14.412 [2024-07-15 18:29:36.773578] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:14.975 18:29:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:14.975 18:29:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:10:14.975 18:29:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:14.975 18:29:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:14.975 18:29:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:14.975 18:29:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:14.975 18:29:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:15.233 [2024-07-15 18:29:37.625586] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:10:15.233 [2024-07-15 18:29:37.625837] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:10:15.233 [2024-07-15 18:29:37.626138] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:10:15.233 18:29:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:10:15.233 18:29:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 9090c891-a1d6-47ed-a175-6ee6fc92006d 00:10:15.233 18:29:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=9090c891-a1d6-47ed-a175-6ee6fc92006d 00:10:15.233 18:29:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:15.233 18:29:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:10:15.233 18:29:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:15.233 18:29:37 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:15.233 18:29:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:15.492 18:29:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9090c891-a1d6-47ed-a175-6ee6fc92006d -t 2000 00:10:15.492 [ 00:10:15.492 { 00:10:15.492 "aliases": [ 00:10:15.492 "lvs/lvol" 00:10:15.492 ], 00:10:15.492 "assigned_rate_limits": { 00:10:15.492 "r_mbytes_per_sec": 0, 00:10:15.492 "rw_ios_per_sec": 0, 00:10:15.492 "rw_mbytes_per_sec": 0, 00:10:15.492 "w_mbytes_per_sec": 0 00:10:15.492 }, 00:10:15.492 "block_size": 4096, 00:10:15.492 "claimed": false, 00:10:15.492 "driver_specific": { 00:10:15.492 "lvol": { 00:10:15.492 "base_bdev": "aio_bdev", 00:10:15.492 "clone": false, 00:10:15.492 "esnap_clone": false, 00:10:15.492 "lvol_store_uuid": "9e91d3d9-1bb3-4473-8f7c-fcc77bc46363", 00:10:15.492 "num_allocated_clusters": 38, 00:10:15.492 "snapshot": false, 00:10:15.492 "thin_provision": false 00:10:15.492 } 00:10:15.492 }, 00:10:15.492 "name": "9090c891-a1d6-47ed-a175-6ee6fc92006d", 00:10:15.492 "num_blocks": 38912, 00:10:15.492 "product_name": "Logical Volume", 00:10:15.492 "supported_io_types": { 00:10:15.492 "abort": false, 00:10:15.492 "compare": false, 00:10:15.492 "compare_and_write": false, 00:10:15.492 "copy": false, 00:10:15.492 "flush": false, 00:10:15.492 "get_zone_info": false, 00:10:15.492 "nvme_admin": false, 00:10:15.492 "nvme_io": false, 00:10:15.492 "nvme_io_md": false, 00:10:15.492 "nvme_iov_md": false, 00:10:15.492 "read": true, 00:10:15.492 "reset": true, 00:10:15.492 "seek_data": true, 00:10:15.492 "seek_hole": true, 00:10:15.492 "unmap": true, 00:10:15.492 "write": true, 00:10:15.492 "write_zeroes": true, 00:10:15.492 "zcopy": false, 00:10:15.492 "zone_append": false, 00:10:15.492 "zone_management": false 00:10:15.492 }, 00:10:15.492 "uuid": "9090c891-a1d6-47ed-a175-6ee6fc92006d", 00:10:15.492 "zoned": false 00:10:15.492 } 00:10:15.492 ] 00:10:15.492 18:29:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:10:15.492 18:29:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:10:15.492 18:29:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9e91d3d9-1bb3-4473-8f7c-fcc77bc46363 00:10:15.750 18:29:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:10:15.750 18:29:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9e91d3d9-1bb3-4473-8f7c-fcc77bc46363 00:10:15.750 18:29:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:10:16.008 18:29:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:10:16.008 18:29:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:16.266 [2024-07-15 18:29:38.661382] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:16.266 18:29:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # 
NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9e91d3d9-1bb3-4473-8f7c-fcc77bc46363 00:10:16.266 18:29:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:10:16.266 18:29:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9e91d3d9-1bb3-4473-8f7c-fcc77bc46363 00:10:16.266 18:29:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:16.266 18:29:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:16.266 18:29:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:16.266 18:29:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:16.266 18:29:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:16.266 18:29:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:16.266 18:29:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:16.266 18:29:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:16.266 18:29:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9e91d3d9-1bb3-4473-8f7c-fcc77bc46363 00:10:16.525 2024/07/15 18:29:38 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:9e91d3d9-1bb3-4473-8f7c-fcc77bc46363], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:10:16.525 request: 00:10:16.525 { 00:10:16.525 "method": "bdev_lvol_get_lvstores", 00:10:16.525 "params": { 00:10:16.525 "uuid": "9e91d3d9-1bb3-4473-8f7c-fcc77bc46363" 00:10:16.525 } 00:10:16.525 } 00:10:16.525 Got JSON-RPC error response 00:10:16.525 GoRPCClient: error on JSON-RPC call 00:10:16.525 18:29:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:10:16.525 18:29:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:16.525 18:29:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:16.525 18:29:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:16.525 18:29:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:16.525 aio_bdev 00:10:16.525 18:29:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 9090c891-a1d6-47ed-a175-6ee6fc92006d 00:10:16.525 18:29:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=9090c891-a1d6-47ed-a175-6ee6fc92006d 00:10:16.525 18:29:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:16.525 18:29:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:10:16.525 18:29:39 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:16.525 18:29:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:16.525 18:29:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:16.783 18:29:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9090c891-a1d6-47ed-a175-6ee6fc92006d -t 2000 00:10:17.041 [ 00:10:17.041 { 00:10:17.041 "aliases": [ 00:10:17.041 "lvs/lvol" 00:10:17.041 ], 00:10:17.041 "assigned_rate_limits": { 00:10:17.041 "r_mbytes_per_sec": 0, 00:10:17.041 "rw_ios_per_sec": 0, 00:10:17.041 "rw_mbytes_per_sec": 0, 00:10:17.041 "w_mbytes_per_sec": 0 00:10:17.041 }, 00:10:17.041 "block_size": 4096, 00:10:17.041 "claimed": false, 00:10:17.041 "driver_specific": { 00:10:17.041 "lvol": { 00:10:17.041 "base_bdev": "aio_bdev", 00:10:17.041 "clone": false, 00:10:17.041 "esnap_clone": false, 00:10:17.041 "lvol_store_uuid": "9e91d3d9-1bb3-4473-8f7c-fcc77bc46363", 00:10:17.041 "num_allocated_clusters": 38, 00:10:17.041 "snapshot": false, 00:10:17.041 "thin_provision": false 00:10:17.041 } 00:10:17.041 }, 00:10:17.041 "name": "9090c891-a1d6-47ed-a175-6ee6fc92006d", 00:10:17.041 "num_blocks": 38912, 00:10:17.041 "product_name": "Logical Volume", 00:10:17.041 "supported_io_types": { 00:10:17.041 "abort": false, 00:10:17.041 "compare": false, 00:10:17.041 "compare_and_write": false, 00:10:17.041 "copy": false, 00:10:17.041 "flush": false, 00:10:17.041 "get_zone_info": false, 00:10:17.041 "nvme_admin": false, 00:10:17.041 "nvme_io": false, 00:10:17.041 "nvme_io_md": false, 00:10:17.041 "nvme_iov_md": false, 00:10:17.041 "read": true, 00:10:17.041 "reset": true, 00:10:17.041 "seek_data": true, 00:10:17.041 "seek_hole": true, 00:10:17.041 "unmap": true, 00:10:17.041 "write": true, 00:10:17.041 "write_zeroes": true, 00:10:17.041 "zcopy": false, 00:10:17.041 "zone_append": false, 00:10:17.041 "zone_management": false 00:10:17.041 }, 00:10:17.041 "uuid": "9090c891-a1d6-47ed-a175-6ee6fc92006d", 00:10:17.041 "zoned": false 00:10:17.041 } 00:10:17.041 ] 00:10:17.041 18:29:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:10:17.042 18:29:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9e91d3d9-1bb3-4473-8f7c-fcc77bc46363 00:10:17.042 18:29:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:17.299 18:29:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:17.299 18:29:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9e91d3d9-1bb3-4473-8f7c-fcc77bc46363 00:10:17.299 18:29:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:17.556 18:29:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:17.556 18:29:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 9090c891-a1d6-47ed-a175-6ee6fc92006d 00:10:17.556 18:29:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9e91d3d9-1bb3-4473-8f7c-fcc77bc46363 00:10:17.814 18:29:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:18.094 18:29:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:18.688 ************************************ 00:10:18.688 END TEST lvs_grow_dirty 00:10:18.688 ************************************ 00:10:18.688 00:10:18.688 real 0m19.047s 00:10:18.688 user 0m38.187s 00:10:18.688 sys 0m7.592s 00:10:18.688 18:29:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:18.688 18:29:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:18.688 18:29:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:10:18.688 18:29:41 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:10:18.688 18:29:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:10:18.688 18:29:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:10:18.688 18:29:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:10:18.688 18:29:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:10:18.688 18:29:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:10:18.688 18:29:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:10:18.688 18:29:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:10:18.688 18:29:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:10:18.688 nvmf_trace.0 00:10:18.688 18:29:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:10:18.688 18:29:41 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:10:18.688 18:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:18.688 18:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:10:18.947 18:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:18.947 18:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:10:18.947 18:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:18.947 18:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:18.947 rmmod nvme_tcp 00:10:18.947 rmmod nvme_fabrics 00:10:18.947 rmmod nvme_keyring 00:10:18.947 18:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:18.947 18:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:10:18.947 18:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:10:18.947 18:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 74217 ']' 00:10:18.947 18:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 74217 00:10:18.947 18:29:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 74217 ']' 00:10:18.947 18:29:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 74217 00:10:18.947 18:29:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:10:18.947 18:29:41 
nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:18.947 18:29:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74217 00:10:18.947 18:29:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:18.947 18:29:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:18.947 killing process with pid 74217 00:10:18.947 18:29:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74217' 00:10:18.947 18:29:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 74217 00:10:18.947 18:29:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 74217 00:10:19.206 18:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:19.206 18:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:19.206 18:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:19.206 18:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:19.206 18:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:19.206 18:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:19.206 18:29:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:19.206 18:29:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:19.206 18:29:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:19.206 00:10:19.206 real 0m38.619s 00:10:19.206 user 0m59.269s 00:10:19.206 sys 0m11.244s 00:10:19.206 18:29:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:19.206 18:29:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:19.206 ************************************ 00:10:19.206 END TEST nvmf_lvs_grow 00:10:19.206 ************************************ 00:10:19.206 18:29:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:19.206 18:29:41 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:19.206 18:29:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:19.206 18:29:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:19.206 18:29:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:19.206 ************************************ 00:10:19.206 START TEST nvmf_bdev_io_wait 00:10:19.206 ************************************ 00:10:19.206 18:29:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:19.465 * Looking for test storage... 
00:10:19.465 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:19.465 18:29:41 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:19.465 18:29:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:10:19.465 18:29:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:19.465 18:29:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:19.465 18:29:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:19.465 18:29:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:19.465 18:29:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:19.465 18:29:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:19.465 18:29:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:19.465 18:29:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:19.465 18:29:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:19.465 18:29:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:19.465 18:29:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:10:19.465 18:29:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=ee8aff67-4252-4979-91cf-1a72f40d57b6 00:10:19.465 18:29:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:19.465 18:29:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:19.465 18:29:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:19.465 18:29:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:19.465 18:29:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:19.465 18:29:41 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:19.465 18:29:41 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:19.465 18:29:41 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:19.465 18:29:41 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.465 18:29:41 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.465 18:29:41 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.465 18:29:41 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:10:19.465 18:29:41 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.465 18:29:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:10:19.465 18:29:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:19.465 18:29:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:19.465 18:29:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:19.465 18:29:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:19.465 18:29:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:19.465 18:29:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:19.465 18:29:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:19.465 18:29:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:19.465 18:29:41 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:19.465 18:29:41 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:19.465 18:29:41 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:10:19.465 18:29:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:19.465 18:29:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:19.465 18:29:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:19.465 18:29:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:19.466 18:29:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:19.466 18:29:41 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:19.466 18:29:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:19.466 18:29:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:19.466 18:29:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:19.466 18:29:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:19.466 18:29:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:19.466 18:29:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:19.466 18:29:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:19.466 18:29:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:19.466 18:29:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:19.466 18:29:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:19.466 18:29:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:19.466 18:29:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:19.466 18:29:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:19.466 18:29:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:19.466 18:29:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:19.466 18:29:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:19.466 18:29:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:19.466 18:29:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:19.466 18:29:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:19.466 18:29:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:19.466 18:29:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:19.466 18:29:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:19.466 Cannot find device "nvmf_tgt_br" 00:10:19.466 18:29:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # true 00:10:19.466 18:29:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:19.466 Cannot find device "nvmf_tgt_br2" 00:10:19.466 18:29:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # true 00:10:19.466 18:29:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:19.466 18:29:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:19.466 Cannot find device "nvmf_tgt_br" 00:10:19.466 18:29:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # true 00:10:19.466 18:29:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:19.466 Cannot find device "nvmf_tgt_br2" 00:10:19.466 18:29:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # true 00:10:19.466 18:29:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:19.466 18:29:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 
00:10:19.466 18:29:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:19.466 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:19.466 18:29:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:10:19.466 18:29:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:19.466 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:19.466 18:29:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:10:19.466 18:29:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:19.466 18:29:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:19.466 18:29:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:19.466 18:29:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:19.725 18:29:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:19.725 18:29:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:19.725 18:29:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:19.725 18:29:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:19.725 18:29:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:19.725 18:29:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:19.725 18:29:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:19.725 18:29:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:19.725 18:29:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:19.725 18:29:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:19.725 18:29:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:19.725 18:29:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:19.725 18:29:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:19.725 18:29:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:19.725 18:29:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:19.725 18:29:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:19.725 18:29:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:19.725 18:29:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:19.725 18:29:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:19.725 18:29:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:19.725 PING 10.0.0.2 (10.0.0.2) 56(84) bytes 
of data. 00:10:19.725 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:10:19.725 00:10:19.725 --- 10.0.0.2 ping statistics --- 00:10:19.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:19.725 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:10:19.725 18:29:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:19.725 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:19.725 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:10:19.725 00:10:19.725 --- 10.0.0.3 ping statistics --- 00:10:19.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:19.725 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:10:19.725 18:29:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:19.725 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:19.725 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:10:19.725 00:10:19.725 --- 10.0.0.1 ping statistics --- 00:10:19.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:19.725 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:10:19.725 18:29:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:19.725 18:29:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@433 -- # return 0 00:10:19.725 18:29:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:19.725 18:29:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:19.725 18:29:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:19.725 18:29:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:19.725 18:29:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:19.725 18:29:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:19.725 18:29:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:19.725 18:29:42 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:10:19.725 18:29:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:19.725 18:29:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:19.725 18:29:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:19.725 18:29:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=74627 00:10:19.725 18:29:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:10:19.725 18:29:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 74627 00:10:19.725 18:29:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 74627 ']' 00:10:19.726 18:29:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:19.726 18:29:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:19.726 18:29:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:19.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
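
For readability, the nvmf_veth_init sequence traced just above reduces to the topology sketch below. This is an illustrative condensation of the commands already visible in the trace, not the actual test/nvmf/common.sh helper; interface names and addresses are the ones printed above, and the corresponding "ip link set ... up" calls are omitted for brevity.

    # Condensed topology sketch (illustrative only; drawn from the trace above)
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br         # target side, first port
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2        # target side, second port
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk                  # move target ends into the namespace
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                         # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target address
    ip link add nvmf_br type bridge                                  # bridge joins the three *_br peers
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings that follow in the log (10.0.0.2, 10.0.0.3 from the initiator side, 10.0.0.1 from inside the namespace) are the sanity check that this topology is reachable before the nvmf target is started.
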
00:10:19.726 18:29:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:19.726 18:29:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:19.984 [2024-07-15 18:29:42.365129] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:10:19.984 [2024-07-15 18:29:42.365203] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:19.984 [2024-07-15 18:29:42.503746] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:20.242 [2024-07-15 18:29:42.603358] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:20.242 [2024-07-15 18:29:42.603413] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:20.242 [2024-07-15 18:29:42.603423] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:20.242 [2024-07-15 18:29:42.603431] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:20.242 [2024-07-15 18:29:42.603439] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:20.242 [2024-07-15 18:29:42.603668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:20.242 [2024-07-15 18:29:42.603725] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:20.242 [2024-07-15 18:29:42.604648] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.242 [2024-07-15 18:29:42.604649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:20.843 18:29:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:20.843 18:29:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:10:20.843 18:29:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:20.843 18:29:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:20.843 18:29:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:20.843 18:29:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:20.843 18:29:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:10:20.843 18:29:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:20.843 18:29:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:20.843 18:29:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:20.843 18:29:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:10:20.843 18:29:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:20.843 18:29:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:20.843 18:29:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:20.843 18:29:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:20.843 18:29:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:20.843 18:29:43 
nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:20.843 [2024-07-15 18:29:43.379818] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:20.843 18:29:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:20.843 18:29:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:20.843 18:29:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:20.843 18:29:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:20.843 Malloc0 00:10:20.843 18:29:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:20.843 18:29:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:20.843 18:29:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:20.843 18:29:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:20.843 18:29:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:20.843 18:29:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:20.843 18:29:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:20.843 18:29:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:20.843 18:29:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:20.843 18:29:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:20.843 18:29:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:20.843 18:29:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:20.843 [2024-07-15 18:29:43.442754] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:20.843 18:29:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:20.843 18:29:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=74684 00:10:20.843 18:29:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:10:20.843 18:29:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:10:20.843 18:29:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:10:20.843 18:29:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:10:20.843 18:29:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=74686 00:10:20.843 18:29:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:20.843 18:29:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:20.843 { 00:10:20.843 "params": { 00:10:20.843 "name": "Nvme$subsystem", 00:10:20.843 "trtype": "$TEST_TRANSPORT", 00:10:20.843 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:20.843 "adrfam": "ipv4", 00:10:20.843 "trsvcid": "$NVMF_PORT", 00:10:20.843 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:20.843 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:20.843 "hdgst": ${hdgst:-false}, 00:10:20.843 "ddgst": 
${ddgst:-false} 00:10:20.843 }, 00:10:20.843 "method": "bdev_nvme_attach_controller" 00:10:20.843 } 00:10:20.843 EOF 00:10:20.843 )") 00:10:20.843 18:29:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=74687 00:10:20.843 18:29:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:10:21.101 18:29:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:10:21.101 18:29:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:10:21.101 18:29:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:10:21.101 18:29:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:10:21.101 18:29:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:10:21.101 18:29:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:10:21.101 18:29:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:10:21.101 18:29:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:21.101 18:29:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:10:21.101 18:29:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:21.101 { 00:10:21.101 "params": { 00:10:21.101 "name": "Nvme$subsystem", 00:10:21.101 "trtype": "$TEST_TRANSPORT", 00:10:21.101 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:21.101 "adrfam": "ipv4", 00:10:21.101 "trsvcid": "$NVMF_PORT", 00:10:21.101 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:21.101 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:21.101 "hdgst": ${hdgst:-false}, 00:10:21.101 "ddgst": ${ddgst:-false} 00:10:21.101 }, 00:10:21.101 "method": "bdev_nvme_attach_controller" 00:10:21.101 } 00:10:21.101 EOF 00:10:21.101 )") 00:10:21.101 18:29:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:21.101 18:29:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:21.101 { 00:10:21.101 "params": { 00:10:21.101 "name": "Nvme$subsystem", 00:10:21.101 "trtype": "$TEST_TRANSPORT", 00:10:21.101 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:21.101 "adrfam": "ipv4", 00:10:21.101 "trsvcid": "$NVMF_PORT", 00:10:21.101 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:21.101 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:21.101 "hdgst": ${hdgst:-false}, 00:10:21.101 "ddgst": ${ddgst:-false} 00:10:21.101 }, 00:10:21.101 "method": "bdev_nvme_attach_controller" 00:10:21.101 } 00:10:21.101 EOF 00:10:21.101 )") 00:10:21.101 18:29:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:10:21.101 18:29:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:10:21.101 18:29:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:10:21.101 18:29:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=74696 00:10:21.101 18:29:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:10:21.101 18:29:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:10:21.101 18:29:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:21.101 "params": { 00:10:21.101 "name": "Nvme1", 00:10:21.101 "trtype": "tcp", 00:10:21.101 "traddr": "10.0.0.2", 00:10:21.101 "adrfam": "ipv4", 00:10:21.101 "trsvcid": "4420", 00:10:21.101 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:21.101 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:21.101 "hdgst": false, 00:10:21.101 "ddgst": false 00:10:21.101 }, 00:10:21.101 "method": "bdev_nvme_attach_controller" 00:10:21.101 }' 00:10:21.101 18:29:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:10:21.101 18:29:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:10:21.102 18:29:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:10:21.102 18:29:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:10:21.102 18:29:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:21.102 18:29:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:21.102 { 00:10:21.102 "params": { 00:10:21.102 "name": "Nvme$subsystem", 00:10:21.102 "trtype": "$TEST_TRANSPORT", 00:10:21.102 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:21.102 "adrfam": "ipv4", 00:10:21.102 "trsvcid": "$NVMF_PORT", 00:10:21.102 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:21.102 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:21.102 "hdgst": ${hdgst:-false}, 00:10:21.102 "ddgst": ${ddgst:-false} 00:10:21.102 }, 00:10:21.102 "method": "bdev_nvme_attach_controller" 00:10:21.102 } 00:10:21.102 EOF 00:10:21.102 )") 00:10:21.102 18:29:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:10:21.102 18:29:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:21.102 "params": { 00:10:21.102 "name": "Nvme1", 00:10:21.102 "trtype": "tcp", 00:10:21.102 "traddr": "10.0.0.2", 00:10:21.102 "adrfam": "ipv4", 00:10:21.102 "trsvcid": "4420", 00:10:21.102 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:21.102 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:21.102 "hdgst": false, 00:10:21.102 "ddgst": false 00:10:21.102 }, 00:10:21.102 "method": "bdev_nvme_attach_controller" 00:10:21.102 }' 00:10:21.102 18:29:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:10:21.102 18:29:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:10:21.102 18:29:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
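Those generated configs feed four bdevperf instances that the bdev_io_wait test runs in parallel, one per workload and core mask, which is why their xtrace output interleaves here. Collected in one place, the launches are roughly as below (simplified: the real script captures WRITE_PID/READ_PID/FLUSH_PID/UNMAP_PID right after each command and waits on them one at a time).

# Roughly equivalent to the four launches traced above; /dev/fd/63 in the
# trace is the process substitution that carries the generated JSON config.
BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
$BDEVPERF -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
$BDEVPERF -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read  -t 1 -s 256 & READ_PID=$!
$BDEVPERF -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
$BDEVPERF -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!
wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"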
00:10:21.102 18:29:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:10:21.102 18:29:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:21.102 "params": { 00:10:21.102 "name": "Nvme1", 00:10:21.102 "trtype": "tcp", 00:10:21.102 "traddr": "10.0.0.2", 00:10:21.102 "adrfam": "ipv4", 00:10:21.102 "trsvcid": "4420", 00:10:21.102 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:21.102 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:21.102 "hdgst": false, 00:10:21.102 "ddgst": false 00:10:21.102 }, 00:10:21.102 "method": "bdev_nvme_attach_controller" 00:10:21.102 }' 00:10:21.102 18:29:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:10:21.102 18:29:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:10:21.102 18:29:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:21.102 "params": { 00:10:21.102 "name": "Nvme1", 00:10:21.102 "trtype": "tcp", 00:10:21.102 "traddr": "10.0.0.2", 00:10:21.102 "adrfam": "ipv4", 00:10:21.102 "trsvcid": "4420", 00:10:21.102 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:21.102 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:21.102 "hdgst": false, 00:10:21.102 "ddgst": false 00:10:21.102 }, 00:10:21.102 "method": "bdev_nvme_attach_controller" 00:10:21.102 }' 00:10:21.102 [2024-07-15 18:29:43.515209] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:10:21.102 [2024-07-15 18:29:43.515279] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:10:21.102 [2024-07-15 18:29:43.521408] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:10:21.102 [2024-07-15 18:29:43.521487] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:10:21.102 18:29:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 74684 00:10:21.102 [2024-07-15 18:29:43.523417] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:10:21.102 [2024-07-15 18:29:43.523562] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:10:21.102 [2024-07-15 18:29:43.532238] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 
00:10:21.102 [2024-07-15 18:29:43.532294] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:10:21.102 [2024-07-15 18:29:43.711446] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:21.361 [2024-07-15 18:29:43.769810] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:21.361 [2024-07-15 18:29:43.807483] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:10:21.361 [2024-07-15 18:29:43.840288] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:21.361 [2024-07-15 18:29:43.848700] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:10:21.361 [2024-07-15 18:29:43.902120] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:21.361 [2024-07-15 18:29:43.920924] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:10:21.361 Running I/O for 1 seconds... 00:10:21.620 [2024-07-15 18:29:43.982780] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:10:21.620 Running I/O for 1 seconds... 00:10:21.620 Running I/O for 1 seconds... 00:10:21.620 Running I/O for 1 seconds... 00:10:22.554 00:10:22.554 Latency(us) 00:10:22.554 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:22.554 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:10:22.554 Nvme1n1 : 1.01 7636.46 29.83 0.00 0.00 16664.11 8317.02 24319.38 00:10:22.554 =================================================================================================================== 00:10:22.554 Total : 7636.46 29.83 0.00 0.00 16664.11 8317.02 24319.38 00:10:22.554 00:10:22.554 Latency(us) 00:10:22.554 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:22.554 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:10:22.554 Nvme1n1 : 1.00 243431.73 950.91 0.00 0.00 523.57 213.85 868.55 00:10:22.554 =================================================================================================================== 00:10:22.554 Total : 243431.73 950.91 0.00 0.00 523.57 213.85 868.55 00:10:22.554 00:10:22.554 Latency(us) 00:10:22.554 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:22.554 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:10:22.554 Nvme1n1 : 1.01 9175.45 35.84 0.00 0.00 13885.81 5527.13 29688.60 00:10:22.554 =================================================================================================================== 00:10:22.554 Total : 9175.45 35.84 0.00 0.00 13885.81 5527.13 29688.60 00:10:22.554 00:10:22.554 Latency(us) 00:10:22.554 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:22.554 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:10:22.554 Nvme1n1 : 1.01 7102.31 27.74 0.00 0.00 17943.91 8422.30 24424.66 00:10:22.554 =================================================================================================================== 00:10:22.554 Total : 7102.31 27.74 0.00 0.00 17943.91 8422.30 24424.66 00:10:22.858 18:29:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 74686 00:10:22.858 18:29:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 74687 00:10:22.858 18:29:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 74696 00:10:22.858 
18:29:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:22.858 18:29:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.858 18:29:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:22.858 18:29:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.858 18:29:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:10:22.858 18:29:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:10:22.858 18:29:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:22.858 18:29:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:10:23.117 18:29:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:23.117 18:29:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:10:23.117 18:29:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:23.117 18:29:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:23.117 rmmod nvme_tcp 00:10:23.117 rmmod nvme_fabrics 00:10:23.117 rmmod nvme_keyring 00:10:23.117 18:29:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:23.117 18:29:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:10:23.117 18:29:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:10:23.117 18:29:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 74627 ']' 00:10:23.117 18:29:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 74627 00:10:23.117 18:29:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 74627 ']' 00:10:23.117 18:29:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 74627 00:10:23.117 18:29:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:10:23.117 18:29:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:23.117 18:29:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74627 00:10:23.117 18:29:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:23.117 18:29:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:23.117 killing process with pid 74627 00:10:23.117 18:29:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74627' 00:10:23.117 18:29:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 74627 00:10:23.117 18:29:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 74627 00:10:23.376 18:29:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:23.376 18:29:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:23.376 18:29:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:23.376 18:29:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:23.376 18:29:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:23.376 18:29:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:23.376 18:29:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:10:23.376 18:29:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:23.376 18:29:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:23.376 ************************************ 00:10:23.376 END TEST nvmf_bdev_io_wait 00:10:23.376 ************************************ 00:10:23.376 00:10:23.376 real 0m4.128s 00:10:23.376 user 0m17.539s 00:10:23.376 sys 0m2.084s 00:10:23.376 18:29:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:23.376 18:29:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:23.376 18:29:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:23.376 18:29:45 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:23.376 18:29:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:23.376 18:29:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:23.376 18:29:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:23.376 ************************************ 00:10:23.376 START TEST nvmf_queue_depth 00:10:23.376 ************************************ 00:10:23.376 18:29:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:23.635 * Looking for test storage... 00:10:23.635 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:23.635 18:29:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:23.635 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:10:23.635 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:23.635 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:23.635 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:23.635 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:23.635 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:23.635 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:23.635 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:23.635 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:23.635 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:23.635 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:23.635 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:10:23.635 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=ee8aff67-4252-4979-91cf-1a72f40d57b6 00:10:23.635 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:23.635 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:23.635 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:23.635 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:23.635 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source 
/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:23.635 18:29:46 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:23.635 18:29:46 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:23.635 18:29:46 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:23.635 18:29:46 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.635 18:29:46 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.635 18:29:46 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.635 18:29:46 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:10:23.635 18:29:46 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.635 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:10:23.635 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:23.635 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:23.635 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:23.635 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:23.635 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:23.635 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' 
']' 00:10:23.635 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:23.635 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:23.635 18:29:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:10:23.635 18:29:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:10:23.635 18:29:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:23.635 18:29:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:10:23.635 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:23.635 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:23.635 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:23.635 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:23.635 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:23.635 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:23.635 18:29:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:23.635 18:29:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:23.635 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:23.635 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:23.635 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:23.635 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:23.635 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:23.635 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:23.635 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:23.635 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:23.635 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:23.636 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:23.636 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:23.636 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:23.636 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:23.636 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:23.636 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:23.636 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:23.636 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:23.636 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:23.636 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:23.636 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:23.636 Cannot find device 
"nvmf_tgt_br" 00:10:23.636 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # true 00:10:23.636 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:23.636 Cannot find device "nvmf_tgt_br2" 00:10:23.636 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # true 00:10:23.636 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:23.636 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:23.636 Cannot find device "nvmf_tgt_br" 00:10:23.636 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # true 00:10:23.636 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:23.636 Cannot find device "nvmf_tgt_br2" 00:10:23.636 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # true 00:10:23.636 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:23.636 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:23.894 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:23.894 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:23.894 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:10:23.894 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:23.894 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:23.894 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:10:23.894 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:23.894 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:23.894 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:23.894 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:23.894 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:23.894 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:23.894 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:23.894 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:23.894 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:23.894 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:23.894 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:23.894 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:23.894 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:23.894 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:23.894 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:10:23.894 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:23.894 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:23.894 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:23.894 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:23.894 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:24.152 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:24.152 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:24.152 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:24.152 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:24.152 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:24.152 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms 00:10:24.152 00:10:24.152 --- 10.0.0.2 ping statistics --- 00:10:24.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:24.152 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:10:24.152 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:24.152 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:24.152 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.111 ms 00:10:24.152 00:10:24.152 --- 10.0.0.3 ping statistics --- 00:10:24.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:24.152 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:10:24.152 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:24.152 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:24.152 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 00:10:24.153 00:10:24.153 --- 10.0.0.1 ping statistics --- 00:10:24.153 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:24.153 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:10:24.153 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:24.153 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@433 -- # return 0 00:10:24.153 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:24.153 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:24.153 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:24.153 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:24.153 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:24.153 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:24.153 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:24.153 18:29:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:10:24.153 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:24.153 18:29:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:24.153 18:29:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:24.153 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=74920 00:10:24.153 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:24.153 18:29:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 74920 00:10:24.153 18:29:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 74920 ']' 00:10:24.153 18:29:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:24.153 18:29:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:24.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:24.153 18:29:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:24.153 18:29:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:24.153 18:29:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:24.153 [2024-07-15 18:29:46.660901] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:10:24.153 [2024-07-15 18:29:46.660974] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:24.411 [2024-07-15 18:29:46.804424] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:24.411 [2024-07-15 18:29:46.903410] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:24.411 [2024-07-15 18:29:46.903478] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:24.411 [2024-07-15 18:29:46.903494] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:24.411 [2024-07-15 18:29:46.903507] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:24.411 [2024-07-15 18:29:46.903517] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:24.411 [2024-07-15 18:29:46.903554] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:24.978 18:29:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:24.978 18:29:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:10:24.978 18:29:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:24.978 18:29:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:24.979 18:29:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:25.237 18:29:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:25.237 18:29:47 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:25.237 18:29:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:25.237 18:29:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:25.237 [2024-07-15 18:29:47.604409] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:25.237 18:29:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:25.237 18:29:47 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:25.237 18:29:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:25.237 18:29:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:25.237 Malloc0 00:10:25.237 18:29:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:25.237 18:29:47 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:25.237 18:29:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:25.237 18:29:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:25.237 18:29:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:25.237 18:29:47 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:25.237 18:29:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:25.237 18:29:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:25.237 18:29:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:25.237 18:29:47 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:25.237 18:29:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:25.237 18:29:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:25.237 [2024-07-15 18:29:47.676440] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:25.237 18:29:47 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:25.237 18:29:47 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=74970 00:10:25.237 18:29:47 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:10:25.237 18:29:47 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:25.237 18:29:47 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 74970 /var/tmp/bdevperf.sock 00:10:25.237 18:29:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 74970 ']' 00:10:25.237 18:29:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:25.237 18:29:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:25.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:25.238 18:29:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:25.238 18:29:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:25.238 18:29:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:25.238 [2024-07-15 18:29:47.734147] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:10:25.238 [2024-07-15 18:29:47.734227] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74970 ] 00:10:25.497 [2024-07-15 18:29:47.873832] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:25.497 [2024-07-15 18:29:47.972183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:26.063 18:29:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:26.063 18:29:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:10:26.063 18:29:48 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:10:26.063 18:29:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:26.063 18:29:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:26.063 NVMe0n1 00:10:26.063 18:29:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:26.063 18:29:48 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:26.322 Running I/O for 10 seconds... 
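Reassembled from the rpc_cmd calls above, the queue_depth test boils down to: create the TCP transport and a Malloc namespace on the nvmf_tgt that nvmfappstart launched inside nvmf_tgt_ns_spdk, export it on 10.0.0.2:4420, then drive it from a standalone bdevperf at queue depth 1024. A condensed equivalent using scripts/rpc.py directly is sketched below; rpc_cmd is effectively the test suite's wrapper around that script, and waitforlisten plus error handling are omitted.

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Target side: transport, backing bdev, subsystem, namespace, listener.
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: bdevperf waits in -z mode on its own RPC socket, attaches
# the remote controller over TCP, then perform_tests starts the 10s verify run.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bdevperf.sock perform_tests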
00:10:36.329 00:10:36.329 Latency(us) 00:10:36.329 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:36.329 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:10:36.329 Verification LBA range: start 0x0 length 0x4000 00:10:36.329 NVMe0n1 : 10.06 11868.35 46.36 0.00 0.00 85973.36 19266.00 58956.08 00:10:36.329 =================================================================================================================== 00:10:36.329 Total : 11868.35 46.36 0.00 0.00 85973.36 19266.00 58956.08 00:10:36.329 0 00:10:36.329 18:29:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 74970 00:10:36.329 18:29:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 74970 ']' 00:10:36.329 18:29:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 74970 00:10:36.329 18:29:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:10:36.329 18:29:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:36.329 18:29:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74970 00:10:36.329 18:29:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:36.329 killing process with pid 74970 00:10:36.329 Received shutdown signal, test time was about 10.000000 seconds 00:10:36.329 00:10:36.329 Latency(us) 00:10:36.329 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:36.329 =================================================================================================================== 00:10:36.329 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:36.329 18:29:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:36.329 18:29:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74970' 00:10:36.329 18:29:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 74970 00:10:36.330 18:29:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 74970 00:10:36.589 18:29:59 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:36.589 18:29:59 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:10:36.589 18:29:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:36.589 18:29:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:10:36.589 18:29:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:36.589 18:29:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:10:36.589 18:29:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:36.589 18:29:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:36.589 rmmod nvme_tcp 00:10:36.589 rmmod nvme_fabrics 00:10:36.589 rmmod nvme_keyring 00:10:36.589 18:29:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:36.589 18:29:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:10:36.589 18:29:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:10:36.589 18:29:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 74920 ']' 00:10:36.589 18:29:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 74920 00:10:36.589 18:29:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 74920 ']' 00:10:36.589 
18:29:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 74920 00:10:36.589 18:29:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:10:36.589 18:29:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:36.589 18:29:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74920 00:10:36.589 18:29:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:36.589 18:29:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:36.589 killing process with pid 74920 00:10:36.589 18:29:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74920' 00:10:36.589 18:29:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 74920 00:10:36.589 18:29:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 74920 00:10:36.848 18:29:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:36.848 18:29:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:36.848 18:29:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:36.848 18:29:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:36.848 18:29:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:36.848 18:29:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:36.848 18:29:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:36.848 18:29:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:36.848 18:29:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:36.848 00:10:36.848 real 0m13.536s 00:10:36.848 user 0m22.694s 00:10:36.848 sys 0m2.413s 00:10:36.848 18:29:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:36.848 ************************************ 00:10:36.848 END TEST nvmf_queue_depth 00:10:36.848 ************************************ 00:10:36.848 18:29:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:37.107 18:29:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:37.107 18:29:59 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:37.107 18:29:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:37.107 18:29:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:37.107 18:29:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:37.107 ************************************ 00:10:37.107 START TEST nvmf_target_multipath 00:10:37.107 ************************************ 00:10:37.107 18:29:59 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:37.107 * Looking for test storage... 
00:10:37.107 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:37.107 18:29:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:37.107 18:29:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:10:37.107 18:29:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:37.107 18:29:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:37.107 18:29:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:37.107 18:29:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:37.107 18:29:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:37.107 18:29:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:37.107 18:29:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:37.107 18:29:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:37.107 18:29:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:37.107 18:29:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:37.107 18:29:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:10:37.107 18:29:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=ee8aff67-4252-4979-91cf-1a72f40d57b6 00:10:37.107 18:29:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:37.107 18:29:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:37.107 18:29:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:37.107 18:29:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:37.107 18:29:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:37.107 18:29:59 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:37.107 18:29:59 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:37.107 18:29:59 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:37.107 18:29:59 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.108 18:29:59 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.108 18:29:59 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.108 18:29:59 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:10:37.108 18:29:59 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.108 18:29:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:10:37.108 18:29:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:37.108 18:29:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:37.108 18:29:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:37.108 18:29:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:37.108 18:29:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:37.108 18:29:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:37.108 18:29:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:37.108 18:29:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:37.108 18:29:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:37.108 18:29:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:37.108 18:29:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:37.108 18:29:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:37.108 18:29:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:10:37.108 18:29:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:37.108 18:29:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:37.108 18:29:59 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:37.108 18:29:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:37.108 18:29:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:37.108 18:29:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:37.108 18:29:59 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:37.108 18:29:59 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:37.108 18:29:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:37.108 18:29:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:37.108 18:29:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:37.108 18:29:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:37.108 18:29:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:37.108 18:29:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:37.108 18:29:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:37.108 18:29:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:37.108 18:29:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:37.108 18:29:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:37.108 18:29:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:37.108 18:29:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:37.108 18:29:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:37.108 18:29:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:37.108 18:29:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:37.108 18:29:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:37.108 18:29:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:37.108 18:29:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:37.108 18:29:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:37.367 18:29:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:37.367 Cannot find device "nvmf_tgt_br" 00:10:37.367 18:29:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # true 00:10:37.367 18:29:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:37.367 Cannot find device "nvmf_tgt_br2" 00:10:37.367 18:29:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # true 00:10:37.367 18:29:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:37.367 18:29:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:37.367 Cannot find device "nvmf_tgt_br" 00:10:37.367 18:29:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # true 00:10:37.367 
18:29:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:37.367 Cannot find device "nvmf_tgt_br2" 00:10:37.367 18:29:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # true 00:10:37.367 18:29:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:37.367 18:29:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:37.367 18:29:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:37.367 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:37.367 18:29:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:10:37.367 18:29:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:37.367 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:37.367 18:29:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:10:37.367 18:29:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:37.367 18:29:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:37.367 18:29:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:37.367 18:29:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:37.367 18:29:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:37.367 18:29:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:37.626 18:29:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:37.626 18:29:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:37.626 18:29:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:37.626 18:29:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:37.626 18:30:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:37.626 18:30:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:37.626 18:30:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:37.626 18:30:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:37.626 18:30:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:37.626 18:30:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:37.626 18:30:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:37.626 18:30:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:37.626 18:30:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:37.626 18:30:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:10:37.626 18:30:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:37.626 18:30:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:37.626 18:30:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:37.626 18:30:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:37.626 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:37.626 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:10:37.626 00:10:37.626 --- 10.0.0.2 ping statistics --- 00:10:37.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:37.626 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:10:37.626 18:30:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:37.626 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:37.626 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:10:37.626 00:10:37.626 --- 10.0.0.3 ping statistics --- 00:10:37.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:37.626 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:10:37.626 18:30:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:37.626 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:37.627 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:10:37.627 00:10:37.627 --- 10.0.0.1 ping statistics --- 00:10:37.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:37.627 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:10:37.627 18:30:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:37.627 18:30:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@433 -- # return 0 00:10:37.627 18:30:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:37.627 18:30:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:37.627 18:30:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:37.627 18:30:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:37.627 18:30:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:37.627 18:30:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:37.627 18:30:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:37.627 18:30:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:10:37.627 18:30:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:10:37.627 18:30:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:10:37.627 18:30:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:37.627 18:30:00 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:37.627 18:30:00 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:37.627 18:30:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@481 -- # nvmfpid=75302 00:10:37.627 18:30:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
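Condensed, the per-test network bring-up traced above amounts to roughly the following standalone sketch. Interface names, addresses and firewall rules are taken from the trace; the ordering and the loop are a simplification, not the verbatim nvmf/common.sh helper.

# Rough reconstruction of the veth/bridge topology used by the test.
ip netns add nvmf_tgt_ns_spdk

# Three veth pairs: the initiator-side pair stays on the host, the "if" end
# of the two target-side pairs is moved into the target namespace.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Addressing: initiator 10.0.0.1, target listeners 10.0.0.2 and 10.0.0.3.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring everything up and switch the host-side ends through one bridge.
ip link add nvmf_br type bridge
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Allow NVMe/TCP (port 4420) in on the initiator interface and bridged forwarding.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Sanity checks, as in the trace: both listener addresses and the reverse path.
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

After this, 10.0.0.2 and 10.0.0.3 are the two portals the multipath test advertises, which is why the connect steps later in the trace reach the same NQN over both addresses.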
00:10:37.627 18:30:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@482 -- # waitforlisten 75302 00:10:37.627 18:30:00 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@829 -- # '[' -z 75302 ']' 00:10:37.627 18:30:00 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:37.627 18:30:00 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:37.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:37.627 18:30:00 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:37.627 18:30:00 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:37.627 18:30:00 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:37.627 [2024-07-15 18:30:00.230227] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:10:37.627 [2024-07-15 18:30:00.230296] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:37.885 [2024-07-15 18:30:00.371766] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:37.885 [2024-07-15 18:30:00.462958] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:37.885 [2024-07-15 18:30:00.463024] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:37.885 [2024-07-15 18:30:00.463035] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:37.885 [2024-07-15 18:30:00.463043] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:37.885 [2024-07-15 18:30:00.463050] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
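The target is launched inside that namespace and the test then blocks in waitforlisten until the RPC socket responds. A simplified sketch of that step, with the binary path and flags copied from the trace and a plain polling loop standing in for the real helper:

# -m 0xF  -> core mask for four reactors (the trace reports cores 0-3 starting)
# -e 0xFFFF -> enable all tracepoint groups, as reported by app_setup_trace
# -i 0    -> shared-memory id, matching /dev/shm/nvmf_trace.0 and 'spdk_trace ... -i 0'
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# Stand-in for waitforlisten: poll the UNIX-domain RPC socket until it answers.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods \
      >/dev/null 2>&1; do
    sleep 0.1
done

The two trace-capture hints printed at startup stay valid while the target runs: 'spdk_trace -s nvmf -i 0' for a live snapshot, or copying /dev/shm/nvmf_trace.0 for offline analysis.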
00:10:37.885 [2024-07-15 18:30:00.463186] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:37.885 [2024-07-15 18:30:00.463382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:37.885 [2024-07-15 18:30:00.464348] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:37.885 [2024-07-15 18:30:00.464350] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.821 18:30:01 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:38.821 18:30:01 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@862 -- # return 0 00:10:38.821 18:30:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:38.821 18:30:01 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:38.821 18:30:01 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:38.821 18:30:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:38.821 18:30:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:38.821 [2024-07-15 18:30:01.384527] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:38.821 18:30:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:10:39.082 Malloc0 00:10:39.082 18:30:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:10:39.342 18:30:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:39.601 18:30:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:39.861 [2024-07-15 18:30:02.274230] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:39.861 18:30:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:40.120 [2024-07-15 18:30:02.474047] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:40.120 18:30:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid=ee8aff67-4252-4979-91cf-1a72f40d57b6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:10:40.120 18:30:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid=ee8aff67-4252-4979-91cf-1a72f40d57b6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:10:40.378 18:30:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:10:40.378 18:30:02 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # local i=0 00:10:40.378 18:30:02 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 
nvme_devices=0 00:10:40.378 18:30:02 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:40.378 18:30:02 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # sleep 2 00:10:42.339 18:30:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:42.339 18:30:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:42.339 18:30:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:42.596 18:30:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:42.596 18:30:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:42.596 18:30:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # return 0 00:10:42.596 18:30:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:10:42.596 18:30:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:10:42.596 18:30:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:10:42.596 18:30:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:10:42.596 18:30:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:10:42.596 18:30:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:10:42.596 18:30:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:10:42.596 18:30:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:10:42.596 18:30:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:10:42.596 18:30:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:10:42.596 18:30:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:10:42.596 18:30:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:10:42.596 18:30:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:10:42.596 18:30:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:10:42.596 18:30:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:10:42.596 18:30:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:42.596 18:30:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:42.596 18:30:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:10:42.596 18:30:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:42.596 18:30:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:10:42.596 18:30:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:10:42.596 18:30:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:42.596 18:30:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:42.596 18:30:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:42.596 18:30:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:42.596 18:30:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:10:42.596 18:30:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=75440 00:10:42.596 18:30:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:10:42.596 18:30:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:10:42.596 [global] 00:10:42.596 thread=1 00:10:42.596 invalidate=1 00:10:42.596 rw=randrw 00:10:42.596 time_based=1 00:10:42.596 runtime=6 00:10:42.596 ioengine=libaio 00:10:42.596 direct=1 00:10:42.596 bs=4096 00:10:42.596 iodepth=128 00:10:42.596 norandommap=0 00:10:42.596 numjobs=1 00:10:42.596 00:10:42.596 verify_dump=1 00:10:42.596 verify_backlog=512 00:10:42.596 verify_state_save=0 00:10:42.596 do_verify=1 00:10:42.596 verify=crc32c-intel 00:10:42.596 [job0] 00:10:42.596 filename=/dev/nvme0n1 00:10:42.596 Could not set queue depth (nvme0n1) 00:10:42.596 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:42.596 fio-3.35 00:10:42.596 Starting 1 thread 00:10:43.530 18:30:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:10:43.788 18:30:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:10:43.788 18:30:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:10:43.788 18:30:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:10:43.788 18:30:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:43.788 18:30:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:43.788 18:30:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:10:43.788 18:30:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:43.788 18:30:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:10:43.788 18:30:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:10:43.788 18:30:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:43.788 18:30:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:43.788 18:30:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:43.788 18:30:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:43.788 18:30:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:10:45.167 18:30:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:10:45.167 18:30:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:45.167 18:30:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:45.167 18:30:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:10:45.167 18:30:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:10:45.427 18:30:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:10:45.427 18:30:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:10:45.427 18:30:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:45.427 18:30:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:45.427 18:30:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:45.427 18:30:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:45.427 18:30:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:10:45.427 18:30:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:10:45.427 18:30:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:45.427 18:30:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:45.427 18:30:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:45.427 18:30:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:45.427 18:30:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:10:46.366 18:30:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:10:46.366 18:30:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:46.366 18:30:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:46.366 18:30:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 75440 00:10:48.940 00:10:48.940 job0: (groupid=0, jobs=1): err= 0: pid=75461: Mon Jul 15 18:30:11 2024 00:10:48.940 read: IOPS=14.0k, BW=54.5MiB/s (57.2MB/s)(327MiB/6003msec) 00:10:48.940 slat (usec): min=4, max=6123, avg=37.88, stdev=150.21 00:10:48.940 clat (usec): min=240, max=38908, avg=6273.68, stdev=1412.30 00:10:48.940 lat (usec): min=277, max=38923, avg=6311.56, stdev=1415.46 00:10:48.940 clat percentiles (usec): 00:10:48.940 | 1.00th=[ 3654], 5.00th=[ 4555], 10.00th=[ 5145], 20.00th=[ 5538], 00:10:48.940 | 30.00th=[ 5735], 40.00th=[ 5997], 50.00th=[ 6194], 60.00th=[ 6390], 00:10:48.940 | 70.00th=[ 6652], 80.00th=[ 6915], 90.00th=[ 7373], 95.00th=[ 8291], 00:10:48.940 | 99.00th=[ 9896], 99.50th=[10421], 99.90th=[15401], 99.95th=[37487], 00:10:48.940 | 99.99th=[38536] 00:10:48.940 bw ( KiB/s): min=10712, max=37024, per=50.63%, avg=28275.45, stdev=9386.32, samples=11 00:10:48.940 iops : min= 2678, max= 9256, avg=7068.82, stdev=2346.58, samples=11 00:10:48.940 write: IOPS=8507, BW=33.2MiB/s (34.8MB/s)(170MiB/5114msec); 0 zone resets 00:10:48.940 slat (usec): min=5, max=32461, avg=52.77, stdev=180.23 00:10:48.940 clat (usec): min=244, max=38655, avg=5388.18, stdev=1659.74 00:10:48.940 lat (usec): min=322, max=38709, avg=5440.96, stdev=1669.24 00:10:48.940 clat percentiles (usec): 00:10:48.940 | 1.00th=[ 2474], 5.00th=[ 3752], 10.00th=[ 4113], 20.00th=[ 4686], 00:10:48.940 | 30.00th=[ 4948], 40.00th=[ 5145], 50.00th=[ 5342], 60.00th=[ 5473], 00:10:48.940 | 70.00th=[ 5669], 80.00th=[ 5932], 90.00th=[ 6325], 95.00th=[ 7046], 00:10:48.940 | 99.00th=[ 9372], 99.50th=[10159], 99.90th=[36963], 99.95th=[37487], 00:10:48.940 | 99.99th=[38536] 00:10:48.940 bw ( KiB/s): min=10928, max=36654, per=83.10%, avg=28279.09, stdev=9087.72, samples=11 00:10:48.940 iops : min= 2732, max= 9163, avg=7069.64, stdev=2271.85, samples=11 00:10:48.940 lat (usec) : 250=0.01%, 500=0.02%, 750=0.02%, 1000=0.04% 00:10:48.940 lat (msec) : 2=0.22%, 4=3.76%, 10=95.20%, 20=0.65%, 50=0.10% 00:10:48.940 cpu : usr=7.50%, sys=33.20%, ctx=10614, majf=0, minf=133 00:10:48.940 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:10:48.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:48.940 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:48.940 issued rwts: total=83814,43506,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:48.940 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:48.940 00:10:48.940 Run status group 0 (all jobs): 00:10:48.940 READ: bw=54.5MiB/s (57.2MB/s), 54.5MiB/s-54.5MiB/s (57.2MB/s-57.2MB/s), io=327MiB (343MB), run=6003-6003msec 00:10:48.940 WRITE: bw=33.2MiB/s (34.8MB/s), 33.2MiB/s-33.2MiB/s (34.8MB/s-34.8MB/s), io=170MiB (178MB), run=5114-5114msec 00:10:48.940 00:10:48.940 Disk stats (read/write): 00:10:48.940 
nvme0n1: ios=82699/42604, merge=0/0, ticks=449943/187724, in_queue=637667, util=98.65% 00:10:48.940 18:30:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:10:48.940 18:30:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:10:49.199 18:30:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:10:49.199 18:30:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:10:49.199 18:30:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:49.199 18:30:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:49.199 18:30:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:49.199 18:30:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:49.199 18:30:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:10:49.199 18:30:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:10:49.199 18:30:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:49.199 18:30:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:49.199 18:30:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:49.199 18:30:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:10:49.199 18:30:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:10:50.578 18:30:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:10:50.578 18:30:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:50.578 18:30:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:50.578 18:30:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:10:50.578 18:30:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=75589 00:10:50.578 18:30:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:10:50.578 18:30:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:10:50.578 [global] 00:10:50.578 thread=1 00:10:50.578 invalidate=1 00:10:50.578 rw=randrw 00:10:50.578 time_based=1 00:10:50.578 runtime=6 00:10:50.578 ioengine=libaio 00:10:50.578 direct=1 00:10:50.578 bs=4096 00:10:50.578 iodepth=128 00:10:50.578 norandommap=0 00:10:50.578 numjobs=1 00:10:50.578 00:10:50.578 verify_dump=1 00:10:50.578 verify_backlog=512 00:10:50.578 verify_state_save=0 00:10:50.578 do_verify=1 00:10:50.578 verify=crc32c-intel 00:10:50.578 [job0] 00:10:50.578 filename=/dev/nvme0n1 00:10:50.578 Could not set queue depth (nvme0n1) 00:10:50.578 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:50.578 fio-3.35 00:10:50.578 Starting 1 thread 00:10:51.160 18:30:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:10:51.418 18:30:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:10:51.675 18:30:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:10:51.675 18:30:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:10:51.675 18:30:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:51.675 18:30:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:51.675 18:30:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:51.675 18:30:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:51.675 18:30:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:10:51.675 18:30:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:10:51.675 18:30:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:51.675 18:30:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:51.675 18:30:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:51.675 18:30:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:51.675 18:30:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:10:52.626 18:30:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:10:52.626 18:30:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:52.626 18:30:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:52.626 18:30:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:10:52.897 18:30:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:10:53.156 18:30:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:10:53.156 18:30:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:10:53.156 18:30:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:53.156 18:30:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:53.156 18:30:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:53.156 18:30:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:53.156 18:30:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:10:53.156 18:30:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:10:53.156 18:30:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:53.156 18:30:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:53.156 18:30:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:53.156 18:30:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:53.156 18:30:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:10:54.089 18:30:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:10:54.089 18:30:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:54.089 18:30:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:54.089 18:30:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 75589 00:10:56.620 00:10:56.620 job0: (groupid=0, jobs=1): err= 0: pid=75616: Mon Jul 15 18:30:19 2024 00:10:56.620 read: IOPS=15.1k, BW=59.1MiB/s (61.9MB/s)(355MiB/6003msec) 00:10:56.620 slat (usec): min=3, max=4856, avg=32.08, stdev=131.51 00:10:56.620 clat (usec): min=188, max=49967, avg=5856.07, stdev=1297.40 00:10:56.620 lat (usec): min=200, max=49986, avg=5888.15, stdev=1304.18 00:10:56.620 clat percentiles (usec): 00:10:56.620 | 1.00th=[ 2704], 5.00th=[ 3916], 10.00th=[ 4293], 20.00th=[ 5014], 00:10:56.620 | 30.00th=[ 5407], 40.00th=[ 5604], 50.00th=[ 5866], 60.00th=[ 6128], 00:10:56.620 | 70.00th=[ 6390], 80.00th=[ 6652], 90.00th=[ 7111], 95.00th=[ 7832], 00:10:56.620 | 99.00th=[ 9503], 99.50th=[ 9896], 99.90th=[11600], 99.95th=[11994], 00:10:56.620 | 99.99th=[13435] 00:10:56.620 bw ( KiB/s): min=15320, max=46376, per=51.69%, avg=31262.00, stdev=10979.85, samples=11 00:10:56.620 iops : min= 3830, max=11594, avg=7815.45, stdev=2744.96, samples=11 00:10:56.620 write: IOPS=9226, BW=36.0MiB/s (37.8MB/s)(184MiB/5099msec); 0 zone resets 00:10:56.620 slat (usec): min=11, max=1179, avg=45.69, stdev=76.59 00:10:56.620 clat (usec): min=247, max=12192, avg=4872.53, stdev=1199.07 00:10:56.620 lat (usec): min=323, max=12217, avg=4918.22, stdev=1206.29 00:10:56.620 clat percentiles (usec): 00:10:56.620 | 1.00th=[ 2343], 5.00th=[ 2999], 10.00th=[ 3359], 20.00th=[ 3818], 00:10:56.620 | 30.00th=[ 4228], 40.00th=[ 4686], 50.00th=[ 4948], 60.00th=[ 5211], 00:10:56.620 | 70.00th=[ 5407], 80.00th=[ 5669], 90.00th=[ 6063], 95.00th=[ 6652], 00:10:56.620 | 99.00th=[ 8717], 99.50th=[ 9241], 99.90th=[10159], 99.95th=[10421], 00:10:56.620 | 99.99th=[11600] 00:10:56.620 bw ( KiB/s): min=15848, max=46816, per=85.00%, avg=31370.91, stdev=10572.44, samples=11 00:10:56.620 iops : min= 3962, max=11704, avg=7842.73, stdev=2643.11, samples=11 00:10:56.620 lat (usec) : 250=0.01%, 500=0.03%, 750=0.04%, 1000=0.05% 00:10:56.620 lat (msec) : 2=0.35%, 4=11.78%, 10=87.41%, 20=0.34%, 50=0.01% 00:10:56.621 cpu : usr=7.46%, sys=34.09%, ctx=11710, majf=0, minf=141 00:10:56.621 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:10:56.621 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:56.621 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:56.621 issued rwts: total=90763,47046,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:56.621 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:56.621 00:10:56.621 Run status group 0 (all jobs): 00:10:56.621 READ: bw=59.1MiB/s (61.9MB/s), 59.1MiB/s-59.1MiB/s (61.9MB/s-61.9MB/s), io=355MiB (372MB), run=6003-6003msec 00:10:56.621 WRITE: bw=36.0MiB/s (37.8MB/s), 36.0MiB/s-36.0MiB/s (37.8MB/s-37.8MB/s), io=184MiB (193MB), run=5099-5099msec 00:10:56.621 00:10:56.621 Disk stats (read/write): 00:10:56.621 nvme0n1: ios=89659/46146, merge=0/0, ticks=458040/185059, in_queue=643099, util=98.68% 00:10:56.621 18:30:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:56.621 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:56.621 18:30:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:56.621 18:30:19 
nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1219 -- # local i=0 00:10:56.621 18:30:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:56.621 18:30:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:56.621 18:30:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:56.621 18:30:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:56.621 18:30:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # return 0 00:10:56.621 18:30:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:56.879 18:30:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:10:56.879 18:30:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:10:56.879 18:30:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:10:56.879 18:30:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:10:56.879 18:30:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:56.879 18:30:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:10:56.879 18:30:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:56.879 18:30:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:10:56.879 18:30:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:56.879 18:30:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:56.879 rmmod nvme_tcp 00:10:56.879 rmmod nvme_fabrics 00:10:56.879 rmmod nvme_keyring 00:10:56.879 18:30:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:56.879 18:30:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:10:56.879 18:30:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:10:56.879 18:30:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n 75302 ']' 00:10:56.879 18:30:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@490 -- # killprocess 75302 00:10:56.879 18:30:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@948 -- # '[' -z 75302 ']' 00:10:56.879 18:30:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@952 -- # kill -0 75302 00:10:56.879 18:30:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # uname 00:10:56.879 18:30:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:56.879 18:30:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75302 00:10:57.152 killing process with pid 75302 00:10:57.152 18:30:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:57.152 18:30:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:57.152 18:30:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75302' 00:10:57.152 18:30:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@967 -- # kill 75302 00:10:57.152 18:30:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@972 
-- # wait 75302 00:10:57.152 18:30:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:57.152 18:30:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:57.152 18:30:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:57.152 18:30:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:57.152 18:30:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:57.152 18:30:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:57.152 18:30:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:57.152 18:30:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:57.411 18:30:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:57.411 00:10:57.411 real 0m20.250s 00:10:57.411 user 1m17.289s 00:10:57.411 sys 0m8.658s 00:10:57.411 18:30:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:57.411 18:30:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:57.411 ************************************ 00:10:57.411 END TEST nvmf_target_multipath 00:10:57.411 ************************************ 00:10:57.411 18:30:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:57.411 18:30:19 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:57.411 18:30:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:57.411 18:30:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:57.411 18:30:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:57.411 ************************************ 00:10:57.411 START TEST nvmf_zcopy 00:10:57.411 ************************************ 00:10:57.411 18:30:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:57.411 * Looking for test storage... 
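Teardown mirrors the setup; condensed from the trace above, with the PID and NQN as logged and remove_spdk_ns assumed to reduce to deleting the namespace created earlier:

# Drop both multipath controllers on the host, then remove the subsystem on the target.
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

# Host-side cleanup: fio verify-state files and the kernel initiator modules
# (the trace shows nvme_tcp, nvme_fabrics and nvme_keyring being removed).
rm -f ./local-job0-0-verify.state ./local-job1-1-verify.state
modprobe -v -r nvme-tcp

# Target-side cleanup: stop nvmf_tgt, drop the namespace, flush the leftover address.
kill 75302 && wait 75302
ip netns delete nvmf_tgt_ns_spdk   # assumed body of remove_spdk_ns
ip -4 addr flush nvmf_init_if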
00:10:57.411 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:57.411 18:30:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:57.411 18:30:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:57.411 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:57.411 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:57.411 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:57.411 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:57.411 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:57.411 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:57.411 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:57.411 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:57.411 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:57.412 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:57.412 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:10:57.412 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=ee8aff67-4252-4979-91cf-1a72f40d57b6 00:10:57.412 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:57.412 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:57.412 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:57.412 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:57.412 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:57.412 18:30:20 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:57.412 18:30:20 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:57.412 18:30:20 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:57.412 18:30:20 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.412 18:30:20 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.412 18:30:20 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.412 18:30:20 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:57.412 18:30:20 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.412 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:10:57.412 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:57.412 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:57.412 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:57.670 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:57.670 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:57.670 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:57.670 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:57.670 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:57.670 18:30:20 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:57.670 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:57.670 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:57.670 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:57.670 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:57.670 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:57.670 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:57.670 18:30:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:57.670 18:30:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:57.670 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:57.670 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:57.670 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:57.670 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:57.670 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:57.670 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:57.670 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:57.670 18:30:20 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:57.670 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:57.670 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:57.670 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:57.670 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:57.670 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:57.670 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:57.670 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:57.670 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:57.670 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:57.670 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:57.670 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:57.670 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:57.670 Cannot find device "nvmf_tgt_br" 00:10:57.670 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # true 00:10:57.670 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:57.670 Cannot find device "nvmf_tgt_br2" 00:10:57.670 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # true 00:10:57.670 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:57.670 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:57.670 Cannot find device "nvmf_tgt_br" 00:10:57.670 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # true 00:10:57.670 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:57.670 Cannot find device "nvmf_tgt_br2" 00:10:57.670 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # true 00:10:57.670 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:57.670 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:57.670 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:57.670 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:57.670 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:10:57.670 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:57.670 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:57.670 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:10:57.670 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:57.670 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:57.670 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:57.670 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:57.928 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:10:57.928 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:57.928 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:57.928 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:57.928 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:57.928 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:57.928 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:57.928 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:57.928 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:57.928 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:57.928 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:57.928 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:57.928 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:57.928 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:57.928 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:57.928 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:57.928 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:57.928 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:57.928 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:57.928 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:57.928 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:57.928 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:10:57.928 00:10:57.928 --- 10.0.0.2 ping statistics --- 00:10:57.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:57.928 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:10:57.928 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:57.928 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:57.928 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.116 ms 00:10:57.928 00:10:57.928 --- 10.0.0.3 ping statistics --- 00:10:57.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:57.928 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:10:57.928 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:57.928 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:57.928 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:10:57.928 00:10:57.928 --- 10.0.0.1 ping statistics --- 00:10:57.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:57.928 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:10:57.928 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:57.928 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@433 -- # return 0 00:10:57.928 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:57.928 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:57.928 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:57.928 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:57.928 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:57.928 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:57.928 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:58.186 18:30:20 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:58.186 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:58.186 18:30:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:58.186 18:30:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:58.186 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=75893 00:10:58.186 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:58.186 18:30:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 75893 00:10:58.186 18:30:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 75893 ']' 00:10:58.186 18:30:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:58.187 18:30:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:58.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:58.187 18:30:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:58.187 18:30:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:58.187 18:30:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:58.187 [2024-07-15 18:30:20.608262] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:10:58.187 [2024-07-15 18:30:20.608341] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:58.187 [2024-07-15 18:30:20.753371] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:58.445 [2024-07-15 18:30:20.856947] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:58.445 [2024-07-15 18:30:20.856996] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:58.445 [2024-07-15 18:30:20.857005] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:58.445 [2024-07-15 18:30:20.857013] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:58.445 [2024-07-15 18:30:20.857020] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:58.445 [2024-07-15 18:30:20.857048] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:59.054 18:30:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:59.054 18:30:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:10:59.054 18:30:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:59.054 18:30:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:59.054 18:30:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:59.054 18:30:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:59.054 18:30:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:59.054 18:30:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:59.054 18:30:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.054 18:30:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:59.054 [2024-07-15 18:30:21.521917] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:59.054 18:30:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.054 18:30:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:59.054 18:30:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.054 18:30:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:59.054 18:30:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.054 18:30:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:59.054 18:30:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.054 18:30:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:59.054 [2024-07-15 18:30:21.545967] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:59.054 18:30:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.054 18:30:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:59.054 18:30:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.054 18:30:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:59.054 18:30:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.054 18:30:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:59.054 18:30:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.054 18:30:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:59.054 malloc0 00:10:59.054 18:30:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.054 
18:30:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:59.054 18:30:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.054 18:30:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:59.054 18:30:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.054 18:30:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:59.054 18:30:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:59.054 18:30:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:10:59.054 18:30:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:10:59.054 18:30:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:59.054 18:30:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:59.054 { 00:10:59.054 "params": { 00:10:59.054 "name": "Nvme$subsystem", 00:10:59.054 "trtype": "$TEST_TRANSPORT", 00:10:59.054 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:59.054 "adrfam": "ipv4", 00:10:59.054 "trsvcid": "$NVMF_PORT", 00:10:59.054 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:59.054 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:59.054 "hdgst": ${hdgst:-false}, 00:10:59.055 "ddgst": ${ddgst:-false} 00:10:59.055 }, 00:10:59.055 "method": "bdev_nvme_attach_controller" 00:10:59.055 } 00:10:59.055 EOF 00:10:59.055 )") 00:10:59.055 18:30:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:10:59.055 18:30:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:10:59.055 18:30:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:10:59.055 18:30:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:59.055 "params": { 00:10:59.055 "name": "Nvme1", 00:10:59.055 "trtype": "tcp", 00:10:59.055 "traddr": "10.0.0.2", 00:10:59.055 "adrfam": "ipv4", 00:10:59.055 "trsvcid": "4420", 00:10:59.055 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:59.055 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:59.055 "hdgst": false, 00:10:59.055 "ddgst": false 00:10:59.055 }, 00:10:59.055 "method": "bdev_nvme_attach_controller" 00:10:59.055 }' 00:10:59.055 [2024-07-15 18:30:21.643327] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:10:59.055 [2024-07-15 18:30:21.643397] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75943 ] 00:10:59.312 [2024-07-15 18:30:21.785371] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:59.312 [2024-07-15 18:30:21.884965] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:59.570 Running I/O for 10 seconds... 
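For reference, the bring-up that rpc_cmd drives in the trace above can be reproduced by hand against an already-running nvmf_tgt. The sketch below is a minimal standalone version under stated assumptions: ./spdk points at an SPDK checkout, the target listens on the same 10.0.0.2:4420 used here, and bdevperf.json stands in for the bdev_nvme attach config assembled just above by gen_nvmf_target_json (the path and file name are illustrative, not taken from the job, which feeds the config through /dev/fd/62 instead).

#!/usr/bin/env bash
# Standalone sketch of the target configuration driven by rpc_cmd above.
# Assumptions: nvmf_tgt is already running inside the target netns, ./spdk is an
# SPDK checkout, and bdevperf.json holds the bdev_nvme attach config printed by
# gen_nvmf_target_json in the trace.
set -euo pipefail

rpc=./spdk/scripts/rpc.py

# zcopy.sh@22: TCP transport with zero-copy enabled
"$rpc" nvmf_create_transport -t tcp -o -c 0 --zcopy

# zcopy.sh@24-25: subsystem capped at 10 namespaces, listening on 10.0.0.2:4420
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# zcopy.sh@29-30: 32 MiB malloc bdev with 4096-byte blocks, exposed as NSID 1
"$rpc" bdev_malloc_create 32 4096 -b malloc0
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

# zcopy.sh@33: 10 seconds of verify I/O at queue depth 128, 8 KiB per I/O
./spdk/build/examples/bdevperf --json bdevperf.json -t 10 -q 128 -w verify -o 8192

The workload parameters match the first bdevperf invocation in the trace (-q 128 -w verify -o 8192 -t 10); only the delivery of the JSON config differs.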
00:11:09.635 00:11:09.635 Latency(us) 00:11:09.635 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:09.635 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:11:09.635 Verification LBA range: start 0x0 length 0x1000 00:11:09.635 Nvme1n1 : 10.01 7705.28 60.20 0.00 0.00 16567.91 1309.40 23056.04 00:11:09.635 =================================================================================================================== 00:11:09.635 Total : 7705.28 60.20 0.00 0.00 16567.91 1309.40 23056.04 00:11:09.635 18:30:32 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=76061 00:11:09.635 18:30:32 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:11:09.635 18:30:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:09.635 18:30:32 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:11:09.635 18:30:32 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:11:09.635 18:30:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:11:09.635 18:30:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:11:09.635 18:30:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:09.635 18:30:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:09.635 { 00:11:09.635 "params": { 00:11:09.636 "name": "Nvme$subsystem", 00:11:09.636 "trtype": "$TEST_TRANSPORT", 00:11:09.636 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:09.636 "adrfam": "ipv4", 00:11:09.636 "trsvcid": "$NVMF_PORT", 00:11:09.636 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:09.636 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:09.636 "hdgst": ${hdgst:-false}, 00:11:09.636 "ddgst": ${ddgst:-false} 00:11:09.636 }, 00:11:09.636 "method": "bdev_nvme_attach_controller" 00:11:09.636 } 00:11:09.636 EOF 00:11:09.636 )") 00:11:09.636 18:30:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:11:09.636 [2024-07-15 18:30:32.233888] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.636 [2024-07-15 18:30:32.233926] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.636 2024/07/15 18:30:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:09.636 18:30:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:11:09.636 18:30:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:11:09.636 18:30:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:09.636 "params": { 00:11:09.636 "name": "Nvme1", 00:11:09.636 "trtype": "tcp", 00:11:09.636 "traddr": "10.0.0.2", 00:11:09.636 "adrfam": "ipv4", 00:11:09.636 "trsvcid": "4420", 00:11:09.636 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:09.636 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:09.636 "hdgst": false, 00:11:09.636 "ddgst": false 00:11:09.636 }, 00:11:09.636 "method": "bdev_nvme_attach_controller" 00:11:09.636 }' 00:11:09.894 [2024-07-15 18:30:32.249831] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.894 [2024-07-15 18:30:32.249854] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.894 2024/07/15 18:30:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:09.894 [2024-07-15 18:30:32.261813] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.894 [2024-07-15 18:30:32.261837] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.894 [2024-07-15 18:30:32.263422] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:11:09.894 [2024-07-15 18:30:32.263488] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76061 ] 00:11:09.895 2024/07/15 18:30:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:09.895 [2024-07-15 18:30:32.273796] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.895 [2024-07-15 18:30:32.273820] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.895 2024/07/15 18:30:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:09.895 [2024-07-15 18:30:32.285780] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.895 [2024-07-15 18:30:32.285804] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.895 2024/07/15 18:30:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:09.895 [2024-07-15 18:30:32.301751] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.895 [2024-07-15 18:30:32.301775] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.895 2024/07/15 18:30:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:09.895 [2024-07-15 18:30:32.313737] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.895 [2024-07-15 18:30:32.313762] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.895 2024/07/15 18:30:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:09.895 [2024-07-15 18:30:32.325721] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.895 [2024-07-15 18:30:32.325745] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.895 2024/07/15 18:30:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:09.895 [2024-07-15 18:30:32.337703] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.895 [2024-07-15 18:30:32.337726] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.895 2024/07/15 18:30:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:09.895 [2024-07-15 18:30:32.349686] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.895 [2024-07-15 18:30:32.349707] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.895 2024/07/15 18:30:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:09.895 [2024-07-15 18:30:32.361668] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.895 [2024-07-15 18:30:32.361691] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.895 2024/07/15 18:30:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:09.895 [2024-07-15 18:30:32.373652] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.895 [2024-07-15 18:30:32.373675] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.895 2024/07/15 18:30:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:09.895 [2024-07-15 18:30:32.385637] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:11:09.895 [2024-07-15 18:30:32.385659] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.895 2024/07/15 18:30:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:09.895 [2024-07-15 18:30:32.397637] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.895 [2024-07-15 18:30:32.397660] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.895 2024/07/15 18:30:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:09.895 [2024-07-15 18:30:32.403514] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:09.895 [2024-07-15 18:30:32.409636] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.895 [2024-07-15 18:30:32.409665] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.895 2024/07/15 18:30:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:09.895 [2024-07-15 18:30:32.425651] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.895 [2024-07-15 18:30:32.425683] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.895 2024/07/15 18:30:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:09.895 [2024-07-15 18:30:32.437641] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.895 [2024-07-15 18:30:32.437661] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.895 2024/07/15 18:30:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:09.895 [2024-07-15 18:30:32.449633] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.895 [2024-07-15 18:30:32.449654] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.895 2024/07/15 18:30:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:09.895 [2024-07-15 18:30:32.461634] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.895 [2024-07-15 18:30:32.461661] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.895 2024/07/15 18:30:32 error on JSON-RPC 
call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:09.895 [2024-07-15 18:30:32.473636] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.895 [2024-07-15 18:30:32.473655] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.895 2024/07/15 18:30:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:09.895 [2024-07-15 18:30:32.485634] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.895 [2024-07-15 18:30:32.485657] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.895 2024/07/15 18:30:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:09.895 [2024-07-15 18:30:32.497634] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.895 [2024-07-15 18:30:32.497656] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.895 [2024-07-15 18:30:32.501332] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.895 2024/07/15 18:30:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:10.156 [2024-07-15 18:30:32.509633] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.156 [2024-07-15 18:30:32.509657] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.156 2024/07/15 18:30:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:10.156 [2024-07-15 18:30:32.521640] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.156 [2024-07-15 18:30:32.521670] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.156 2024/07/15 18:30:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:10.156 [2024-07-15 18:30:32.533616] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.156 [2024-07-15 18:30:32.533641] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.156 2024/07/15 18:30:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:10.156 [2024-07-15 18:30:32.545594] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.156 [2024-07-15 18:30:32.545621] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.156 2024/07/15 18:30:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:10.156 [2024-07-15 18:30:32.557589] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.156 [2024-07-15 18:30:32.557621] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.156 2024/07/15 18:30:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:10.156 [2024-07-15 18:30:32.569556] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.156 [2024-07-15 18:30:32.569588] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.156 2024/07/15 18:30:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:10.156 [2024-07-15 18:30:32.581535] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.156 [2024-07-15 18:30:32.581558] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.156 2024/07/15 18:30:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:10.156 [2024-07-15 18:30:32.593558] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.156 [2024-07-15 18:30:32.593598] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.156 2024/07/15 18:30:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:10.156 [2024-07-15 18:30:32.605518] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.156 [2024-07-15 18:30:32.605547] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.156 2024/07/15 18:30:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:10.156 [2024-07-15 18:30:32.617509] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.156 [2024-07-15 18:30:32.617538] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:11:10.156 2024/07/15 18:30:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:10.156 [2024-07-15 18:30:32.629490] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.156 [2024-07-15 18:30:32.629520] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.156 2024/07/15 18:30:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:10.156 [2024-07-15 18:30:32.641469] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.156 [2024-07-15 18:30:32.641496] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.156 2024/07/15 18:30:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:10.156 [2024-07-15 18:30:32.653468] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.156 [2024-07-15 18:30:32.653501] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.156 2024/07/15 18:30:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:10.156 Running I/O for 5 seconds... 
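Each repeated, timestamped error block above corresponds to one nvmf_subsystem_add_ns call issued while the 5-second randrw bdevperf job keeps I/O outstanding; NSID 1 is already attached to cnode1, so every call is rejected with JSON-RPC code -32602. A single iteration can be reproduced by hand as sketched below, assuming the same ./spdk path and a target left in the state built up earlier (the shell wrapper is illustrative; the CI job issues the call through rpc_cmd).

#!/usr/bin/env bash
# One iteration of the add-namespace attempts seen above (sketch; path assumed).
rpc=./spdk/scripts/rpc.py

if ! "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1; then
    # Target-side log: "Requested NSID 1 already in use" / "Unable to add namespace";
    # the RPC layer reports Code=-32602 Msg=Invalid parameters, so the call fails
    # and rpc.py exits non-zero.
    echo "nvmf_subsystem_add_ns rejected: NSID 1 already in use" >&2
fi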
00:11:10.156 [2024-07-15 18:30:32.665433] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.156 [2024-07-15 18:30:32.665454] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.156 2024/07/15 18:30:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:10.156 [2024-07-15 18:30:32.685356] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.156 [2024-07-15 18:30:32.685395] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.156 2024/07/15 18:30:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:10.156 [2024-07-15 18:30:32.700069] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.156 [2024-07-15 18:30:32.700110] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.156 2024/07/15 18:30:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:10.156 [2024-07-15 18:30:32.714159] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.156 [2024-07-15 18:30:32.714199] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.156 2024/07/15 18:30:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:10.156 [2024-07-15 18:30:32.729143] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.156 [2024-07-15 18:30:32.729183] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.156 2024/07/15 18:30:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:10.156 [2024-07-15 18:30:32.744584] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.156 [2024-07-15 18:30:32.744625] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.156 2024/07/15 18:30:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:10.156 [2024-07-15 18:30:32.759683] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.156 [2024-07-15 18:30:32.759725] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.156 2024/07/15 18:30:32 error on JSON-RPC call, 
method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:10.416 [2024-07-15 18:30:32.775004] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.416 [2024-07-15 18:30:32.775046] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.417 2024/07/15 18:30:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:10.417 [2024-07-15 18:30:32.789334] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.417 [2024-07-15 18:30:32.789374] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.417 2024/07/15 18:30:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:10.417 [2024-07-15 18:30:32.803660] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.417 [2024-07-15 18:30:32.803701] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.417 2024/07/15 18:30:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:10.417 [2024-07-15 18:30:32.819092] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.417 [2024-07-15 18:30:32.819132] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.417 2024/07/15 18:30:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:10.417 [2024-07-15 18:30:32.833469] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.417 [2024-07-15 18:30:32.833506] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.417 2024/07/15 18:30:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:10.417 [2024-07-15 18:30:32.844688] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.417 [2024-07-15 18:30:32.844724] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.417 2024/07/15 18:30:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:10.417 [2024-07-15 18:30:32.862849] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.417 [2024-07-15 18:30:32.862888] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.417 2024/07/15 18:30:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:10.417 [2024-07-15 18:30:32.880783] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.417 [2024-07-15 18:30:32.880819] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.417 2024/07/15 18:30:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:10.417 [2024-07-15 18:30:32.895855] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.417 [2024-07-15 18:30:32.895894] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.417 2024/07/15 18:30:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:10.417 [2024-07-15 18:30:32.909773] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.417 [2024-07-15 18:30:32.909811] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.417 2024/07/15 18:30:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:10.417 [2024-07-15 18:30:32.923943] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.417 [2024-07-15 18:30:32.923981] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.417 2024/07/15 18:30:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:10.417 [2024-07-15 18:30:32.938902] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.417 [2024-07-15 18:30:32.938941] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.417 2024/07/15 18:30:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:10.417 [2024-07-15 18:30:32.954310] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.417 [2024-07-15 18:30:32.954350] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.417 2024/07/15 18:30:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:10.417 [2024-07-15 18:30:32.972142] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.417 [2024-07-15 18:30:32.972183] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.417 2024/07/15 18:30:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:10.417 [2024-07-15 18:30:32.986662] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.417 [2024-07-15 18:30:32.986700] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.417 2024/07/15 18:30:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:10.417 [2024-07-15 18:30:33.001292] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.417 [2024-07-15 18:30:33.001332] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.417 2024/07/15 18:30:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:10.417 [2024-07-15 18:30:33.015529] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.417 [2024-07-15 18:30:33.015577] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.417 2024/07/15 18:30:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:10.678 [2024-07-15 18:30:33.029950] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.678 [2024-07-15 18:30:33.029990] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.678 2024/07/15 18:30:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:10.678 [2024-07-15 18:30:33.044207] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.678 [2024-07-15 18:30:33.044247] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.678 2024/07/15 18:30:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:10.678 [2024-07-15 18:30:33.054966] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:11:10.678 [2024-07-15 18:30:33.055005] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.678 2024/07/15 18:30:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:10.678 [2024-07-15 18:30:33.069526] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.678 [2024-07-15 18:30:33.069563] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.678 2024/07/15 18:30:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:10.678 [2024-07-15 18:30:33.083711] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.678 [2024-07-15 18:30:33.083747] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.678 2024/07/15 18:30:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:10.678 [2024-07-15 18:30:33.094402] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.678 [2024-07-15 18:30:33.094438] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.678 2024/07/15 18:30:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:10.678 [2024-07-15 18:30:33.109128] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.678 [2024-07-15 18:30:33.109163] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.678 2024/07/15 18:30:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:10.678 [2024-07-15 18:30:33.122905] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.678 [2024-07-15 18:30:33.122942] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.678 2024/07/15 18:30:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:10.678 [2024-07-15 18:30:33.137641] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.678 [2024-07-15 18:30:33.137673] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.678 2024/07/15 18:30:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:10.678 [2024-07-15 18:30:33.152607] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.678 [2024-07-15 18:30:33.152643] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.678 2024/07/15 18:30:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:10.678 [2024-07-15 18:30:33.167377] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.678 [2024-07-15 18:30:33.167415] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.678 2024/07/15 18:30:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:10.678 [2024-07-15 18:30:33.178601] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.678 [2024-07-15 18:30:33.178636] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.678 2024/07/15 18:30:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:10.678 [2024-07-15 18:30:33.193216] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.678 [2024-07-15 18:30:33.193252] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.678 2024/07/15 18:30:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:10.678 [2024-07-15 18:30:33.207577] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.678 [2024-07-15 18:30:33.207612] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.678 2024/07/15 18:30:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:10.678 [2024-07-15 18:30:33.223332] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.678 [2024-07-15 18:30:33.223370] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.678 2024/07/15 18:30:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:10.678 [2024-07-15 18:30:33.237932] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:11:10.678 [2024-07-15 18:30:33.237968] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.678 2024/07/15 18:30:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:10.678 [2024-07-15 18:30:33.248794] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.678 [2024-07-15 18:30:33.248829] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.678 2024/07/15 18:30:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:10.678 [2024-07-15 18:30:33.263381] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.678 [2024-07-15 18:30:33.263419] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.678 2024/07/15 18:30:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:10.678 [2024-07-15 18:30:33.274010] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.678 [2024-07-15 18:30:33.274052] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.678 2024/07/15 18:30:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:10.678 [2024-07-15 18:30:33.288779] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.678 [2024-07-15 18:30:33.288815] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.938 2024/07/15 18:30:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:10.938 [2024-07-15 18:30:33.303314] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.938 [2024-07-15 18:30:33.303350] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.938 2024/07/15 18:30:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:10.938 [2024-07-15 18:30:33.314234] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.938 [2024-07-15 18:30:33.314270] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.938 2024/07/15 18:30:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:10.938 [2024-07-15 18:30:33.329484] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.938 [2024-07-15 18:30:33.329521] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.938 2024/07/15 18:30:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:10.938 [2024-07-15 18:30:33.343902] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.938 [2024-07-15 18:30:33.343938] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.938 2024/07/15 18:30:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:10.938 [2024-07-15 18:30:33.358661] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.938 [2024-07-15 18:30:33.358698] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.938 2024/07/15 18:30:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:10.938 [2024-07-15 18:30:33.374010] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.938 [2024-07-15 18:30:33.374050] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.938 2024/07/15 18:30:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:10.938 [2024-07-15 18:30:33.388580] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.938 [2024-07-15 18:30:33.388615] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.938 2024/07/15 18:30:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:10.938 [2024-07-15 18:30:33.402154] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.938 [2024-07-15 18:30:33.402189] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.938 2024/07/15 18:30:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:10.938 [2024-07-15 18:30:33.416921] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.938 [2024-07-15 18:30:33.416956] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.938 2024/07/15 18:30:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:10.938 [2024-07-15 18:30:33.428040] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.938 [2024-07-15 18:30:33.428075] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.938 2024/07/15 18:30:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:10.938 [2024-07-15 18:30:33.442683] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.938 [2024-07-15 18:30:33.442718] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.938 2024/07/15 18:30:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:10.938 [2024-07-15 18:30:33.456821] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.938 [2024-07-15 18:30:33.456855] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.938 2024/07/15 18:30:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:10.938 [2024-07-15 18:30:33.471090] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.938 [2024-07-15 18:30:33.471123] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.938 2024/07/15 18:30:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:10.938 [2024-07-15 18:30:33.485383] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.938 [2024-07-15 18:30:33.485420] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.938 2024/07/15 18:30:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:10.938 [2024-07-15 18:30:33.497024] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.938 [2024-07-15 18:30:33.497058] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.938 2024/07/15 18:30:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:10.938 [2024-07-15 18:30:33.511886] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.938 [2024-07-15 18:30:33.511923] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.938 2024/07/15 18:30:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:10.939 [2024-07-15 18:30:33.531180] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.939 [2024-07-15 18:30:33.531224] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.939 2024/07/15 18:30:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:10.939 [2024-07-15 18:30:33.545779] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.939 [2024-07-15 18:30:33.545832] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.939 2024/07/15 18:30:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:11.198 [2024-07-15 18:30:33.556755] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.198 [2024-07-15 18:30:33.556794] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.198 2024/07/15 18:30:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:11.198 [2024-07-15 18:30:33.572021] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.198 [2024-07-15 18:30:33.572063] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.198 2024/07/15 18:30:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:11.198 [2024-07-15 18:30:33.587548] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.198 [2024-07-15 18:30:33.587595] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.198 2024/07/15 18:30:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:11.198 [2024-07-15 18:30:33.601817] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.198 [2024-07-15 18:30:33.601853] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:11:11.198 2024/07/15 18:30:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:11.198 [2024-07-15 18:30:33.612691] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.198 [2024-07-15 18:30:33.612726] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.198 2024/07/15 18:30:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:11.198 [2024-07-15 18:30:33.627488] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.198 [2024-07-15 18:30:33.627525] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.198 2024/07/15 18:30:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:11.198 [2024-07-15 18:30:33.641136] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.198 [2024-07-15 18:30:33.641171] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.198 2024/07/15 18:30:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:11.198 [2024-07-15 18:30:33.655537] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.198 [2024-07-15 18:30:33.655588] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.198 2024/07/15 18:30:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:11.198 [2024-07-15 18:30:33.673188] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.198 [2024-07-15 18:30:33.673228] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.198 2024/07/15 18:30:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:11.198 [2024-07-15 18:30:33.688737] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.198 [2024-07-15 18:30:33.688773] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.198 2024/07/15 18:30:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:11:11.198 [2024-07-15 18:30:33.699623] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.198 [2024-07-15 18:30:33.699656] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.198 2024/07/15 18:30:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:11.198 [2024-07-15 18:30:33.714900] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.198 [2024-07-15 18:30:33.714939] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.198 2024/07/15 18:30:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:11.198 [2024-07-15 18:30:33.730527] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.198 [2024-07-15 18:30:33.730560] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.198 2024/07/15 18:30:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:11.198 [2024-07-15 18:30:33.745346] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.198 [2024-07-15 18:30:33.745382] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.198 2024/07/15 18:30:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:11.198 [2024-07-15 18:30:33.761334] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.198 [2024-07-15 18:30:33.761369] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.198 2024/07/15 18:30:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:11.198 [2024-07-15 18:30:33.775625] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.198 [2024-07-15 18:30:33.775661] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.198 2024/07/15 18:30:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:11.198 [2024-07-15 18:30:33.789837] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.198 [2024-07-15 18:30:33.789873] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.198 2024/07/15 18:30:33 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:11.198 [2024-07-15 18:30:33.804472] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.198 [2024-07-15 18:30:33.804506] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.198 2024/07/15 18:30:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:11.459 [2024-07-15 18:30:33.820101] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.459 [2024-07-15 18:30:33.820134] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.459 2024/07/15 18:30:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:11.459 [2024-07-15 18:30:33.834484] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.459 [2024-07-15 18:30:33.834519] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.459 2024/07/15 18:30:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:11.459 [2024-07-15 18:30:33.848392] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.459 [2024-07-15 18:30:33.848428] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.459 2024/07/15 18:30:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:11.459 [2024-07-15 18:30:33.863634] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.459 [2024-07-15 18:30:33.863681] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.459 2024/07/15 18:30:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:11.459 [2024-07-15 18:30:33.878741] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.459 [2024-07-15 18:30:33.878778] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.459 2024/07/15 18:30:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:11.459 [2024-07-15 18:30:33.894176] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.459 [2024-07-15 18:30:33.894208] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.459 2024/07/15 18:30:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:11.459 [2024-07-15 18:30:33.910285] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.459 [2024-07-15 18:30:33.910317] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.459 2024/07/15 18:30:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:11.459 [2024-07-15 18:30:33.924996] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.459 [2024-07-15 18:30:33.925027] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.459 2024/07/15 18:30:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:11.459 [2024-07-15 18:30:33.940517] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.459 [2024-07-15 18:30:33.940552] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.459 2024/07/15 18:30:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:11.459 [2024-07-15 18:30:33.954790] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.459 [2024-07-15 18:30:33.954826] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.459 2024/07/15 18:30:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:11.459 [2024-07-15 18:30:33.972906] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.459 [2024-07-15 18:30:33.972939] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.459 2024/07/15 18:30:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:11.459 [2024-07-15 18:30:33.987512] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.459 [2024-07-15 18:30:33.987542] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.459 2024/07/15 18:30:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:11.459 [2024-07-15 18:30:34.001665] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.459 [2024-07-15 18:30:34.001698] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.459 2024/07/15 18:30:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:11.459 [2024-07-15 18:30:34.018981] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.459 [2024-07-15 18:30:34.019013] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.459 2024/07/15 18:30:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:11.459 [2024-07-15 18:30:34.036945] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.459 [2024-07-15 18:30:34.036978] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.459 2024/07/15 18:30:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:11.459 [2024-07-15 18:30:34.054518] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.459 [2024-07-15 18:30:34.054552] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.459 2024/07/15 18:30:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:11.459 [2024-07-15 18:30:34.068933] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.459 [2024-07-15 18:30:34.068966] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.459 2024/07/15 18:30:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:11.718 [2024-07-15 18:30:34.082821] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.718 [2024-07-15 18:30:34.082854] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.719 2024/07/15 18:30:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:11.719 [2024-07-15 18:30:34.097599] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:11:11.719 [2024-07-15 18:30:34.097640] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.719 2024/07/15 18:30:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:11.719 [2024-07-15 18:30:34.112936] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.719 [2024-07-15 18:30:34.112971] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.719 2024/07/15 18:30:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:11.719 [2024-07-15 18:30:34.127553] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.719 [2024-07-15 18:30:34.127598] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.719 2024/07/15 18:30:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:11.719 [2024-07-15 18:30:34.138484] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.719 [2024-07-15 18:30:34.138517] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.719 2024/07/15 18:30:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:11.719 [2024-07-15 18:30:34.153136] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.719 [2024-07-15 18:30:34.153169] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.719 2024/07/15 18:30:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:11.719 [2024-07-15 18:30:34.164286] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.719 [2024-07-15 18:30:34.164328] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.719 2024/07/15 18:30:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:11.719 [2024-07-15 18:30:34.178845] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.719 [2024-07-15 18:30:34.178885] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.719 2024/07/15 18:30:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:11.719 [2024-07-15 18:30:34.193009] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.719 [2024-07-15 18:30:34.193043] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.719 2024/07/15 18:30:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:11.719 [2024-07-15 18:30:34.207466] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.719 [2024-07-15 18:30:34.207500] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.719 2024/07/15 18:30:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:11.719 [2024-07-15 18:30:34.223724] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.719 [2024-07-15 18:30:34.223756] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.719 2024/07/15 18:30:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:11.719 [2024-07-15 18:30:34.239718] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.719 [2024-07-15 18:30:34.239752] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.719 2024/07/15 18:30:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:11.719 [2024-07-15 18:30:34.253993] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.719 [2024-07-15 18:30:34.254024] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.719 2024/07/15 18:30:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:11.719 [2024-07-15 18:30:34.268213] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.719 [2024-07-15 18:30:34.268246] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.719 2024/07/15 18:30:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:11.719 [2024-07-15 18:30:34.283679] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
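(Editor's note on the failures above: every entry is the same JSON-RPC call. nvmf_subsystem_add_ns keeps requesting NSID 1 on nqn.2016-06.io.spdk:cnode1 while that NSID is already attached, so spdk_nvmf_subsystem_add_ns_ext() rejects it and the target answers with the standard JSON-RPC "Invalid params" code -32602. A minimal sketch of the request/response pair, reconstructed only from the parameters visible in this log; the request "id", the pretty-printing, and the socket/rpc.py usage mentioned in the comments are assumptions, not taken from this run.)

```python
import json

# Sketch of the JSON-RPC 2.0 request behind the repeated failures above.
# Method name, nqn, bdev_name, nsid and no_auto_visible are taken from the
# logged params; the "id" field and formatting are illustrative assumptions.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "nvmf_subsystem_add_ns",
    "params": {
        "nqn": "nqn.2016-06.io.spdk:cnode1",
        "namespace": {
            "bdev_name": "malloc0",
            "nsid": 1,                  # NSID 1 is already in use, so the target rejects this
            "no_auto_visible": False,
        },
    },
}

# -32602 is the standard JSON-RPC "Invalid params" code; the log shows SPDK
# returning it with Msg=Invalid parameters for each attempt.
expected_error = {
    "jsonrpc": "2.0",
    "id": 1,
    "error": {"code": -32602, "message": "Invalid parameters"},
}

print(json.dumps(request, indent=2))
print(json.dumps(expected_error, indent=2))
```

(In an SPDK tree such a request is normally issued through scripts/rpc.py, e.g. `rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1` against the target's RPC socket; the exact flag spelling is an assumption and not shown in this log.)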
00:11:11.719 [2024-07-15 18:30:34.283710] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.719 2024/07/15 18:30:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:11.719 [2024-07-15 18:30:34.298265] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.719 [2024-07-15 18:30:34.298315] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.719 2024/07/15 18:30:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:11.719 [2024-07-15 18:30:34.312219] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.719 [2024-07-15 18:30:34.312262] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.719 2024/07/15 18:30:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:11.719 [2024-07-15 18:30:34.326869] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.719 [2024-07-15 18:30:34.326904] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.719 2024/07/15 18:30:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:11.978 [2024-07-15 18:30:34.337673] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.978 [2024-07-15 18:30:34.337705] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.978 2024/07/15 18:30:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:11.978 [2024-07-15 18:30:34.354840] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.978 [2024-07-15 18:30:34.354874] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.978 2024/07/15 18:30:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:11.978 [2024-07-15 18:30:34.369130] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.978 [2024-07-15 18:30:34.369164] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.978 2024/07/15 18:30:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:11.978 [2024-07-15 18:30:34.383240] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.978 [2024-07-15 18:30:34.383274] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.978 2024/07/15 18:30:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:11.978 [2024-07-15 18:30:34.397348] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.978 [2024-07-15 18:30:34.397382] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.978 2024/07/15 18:30:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:11.978 [2024-07-15 18:30:34.412199] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.978 [2024-07-15 18:30:34.412233] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.978 2024/07/15 18:30:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:11.978 [2024-07-15 18:30:34.427748] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.978 [2024-07-15 18:30:34.427781] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.978 2024/07/15 18:30:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:11.978 [2024-07-15 18:30:34.442557] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.978 [2024-07-15 18:30:34.442600] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.979 2024/07/15 18:30:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:11.979 [2024-07-15 18:30:34.457688] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.979 [2024-07-15 18:30:34.457721] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.979 2024/07/15 18:30:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:11.979 [2024-07-15 18:30:34.472703] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.979 [2024-07-15 18:30:34.472736] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.979 2024/07/15 18:30:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:11.979 [2024-07-15 18:30:34.489055] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.979 [2024-07-15 18:30:34.489084] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.979 2024/07/15 18:30:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:11.979 [2024-07-15 18:30:34.499989] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.979 [2024-07-15 18:30:34.500019] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.979 2024/07/15 18:30:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:11.979 [2024-07-15 18:30:34.515145] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.979 [2024-07-15 18:30:34.515180] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.979 2024/07/15 18:30:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:11.979 [2024-07-15 18:30:34.526186] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.979 [2024-07-15 18:30:34.526218] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.979 2024/07/15 18:30:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:11.979 [2024-07-15 18:30:34.540922] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.979 [2024-07-15 18:30:34.540955] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.979 2024/07/15 18:30:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:11.979 [2024-07-15 18:30:34.556448] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.979 [2024-07-15 18:30:34.556482] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.979 2024/07/15 18:30:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:11.979 [2024-07-15 18:30:34.570523] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.979 [2024-07-15 18:30:34.570556] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.979 2024/07/15 18:30:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:11.979 [2024-07-15 18:30:34.584822] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.979 [2024-07-15 18:30:34.584869] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.979 2024/07/15 18:30:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:12.237 [2024-07-15 18:30:34.595835] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.237 [2024-07-15 18:30:34.595870] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.237 2024/07/15 18:30:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:12.237 [2024-07-15 18:30:34.610398] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.237 [2024-07-15 18:30:34.610432] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.237 2024/07/15 18:30:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:12.237 [2024-07-15 18:30:34.624438] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.237 [2024-07-15 18:30:34.624472] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.237 2024/07/15 18:30:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:12.237 [2024-07-15 18:30:34.635042] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.237 [2024-07-15 18:30:34.635077] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.237 2024/07/15 18:30:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:12.237 [2024-07-15 18:30:34.649400] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.237 [2024-07-15 18:30:34.649435] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:11:12.237 2024/07/15 18:30:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:12.237 [2024-07-15 18:30:34.663537] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.237 [2024-07-15 18:30:34.663584] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.237 2024/07/15 18:30:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:12.237 [2024-07-15 18:30:34.677813] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.237 [2024-07-15 18:30:34.677849] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.237 2024/07/15 18:30:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:12.237 [2024-07-15 18:30:34.688808] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.237 [2024-07-15 18:30:34.688844] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.237 2024/07/15 18:30:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:12.237 [2024-07-15 18:30:34.703493] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.237 [2024-07-15 18:30:34.703533] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.237 2024/07/15 18:30:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:12.237 [2024-07-15 18:30:34.717834] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.237 [2024-07-15 18:30:34.717870] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.237 2024/07/15 18:30:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:12.237 [2024-07-15 18:30:34.728363] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.237 [2024-07-15 18:30:34.728397] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.237 2024/07/15 18:30:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:11:12.237 [2024-07-15 18:30:34.743105] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.237 [2024-07-15 18:30:34.743141] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.237 2024/07/15 18:30:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:12.237 [2024-07-15 18:30:34.753862] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.237 [2024-07-15 18:30:34.753909] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.237 2024/07/15 18:30:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:12.237 [2024-07-15 18:30:34.768925] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.237 [2024-07-15 18:30:34.768968] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.238 2024/07/15 18:30:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:12.238 [2024-07-15 18:30:34.779803] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.238 [2024-07-15 18:30:34.779844] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.238 2024/07/15 18:30:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:12.238 [2024-07-15 18:30:34.795145] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.238 [2024-07-15 18:30:34.795191] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.238 2024/07/15 18:30:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:12.238 [2024-07-15 18:30:34.810739] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.238 [2024-07-15 18:30:34.810781] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.238 2024/07/15 18:30:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:12.238 [2024-07-15 18:30:34.825112] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.238 [2024-07-15 18:30:34.825147] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.238 2024/07/15 18:30:34 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[the same failure sequence repeats back-to-back, roughly every 10-18 ms, from 18:30:34.839 through 18:30:36.663 (elapsed timestamps 00:11:12.238 through 00:11:14.309): subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext rejects each request with *ERROR*: Requested NSID 1 already in use, nvmf_rpc.c:1553:nvmf_rpc_ns_paused logs *ERROR*: Unable to add namespace, and the test harness records the resulting JSON-RPC failure for nvmf_subsystem_add_ns (params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1]) with Code=-32602 Msg=Invalid parameters]
00:11:14.309 [2024-07-15 18:30:36.674631] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.309 [2024-07-15 18:30:36.674665] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add
namespace 00:11:14.309 2024/07/15 18:30:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:14.309 [2024-07-15 18:30:36.689226] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.309 [2024-07-15 18:30:36.689258] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.309 2024/07/15 18:30:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:14.309 [2024-07-15 18:30:36.703484] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.309 [2024-07-15 18:30:36.703517] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.309 2024/07/15 18:30:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:14.309 [2024-07-15 18:30:36.717513] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.309 [2024-07-15 18:30:36.717546] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.309 2024/07/15 18:30:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:14.309 [2024-07-15 18:30:36.731867] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.309 [2024-07-15 18:30:36.731901] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.309 2024/07/15 18:30:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:14.309 [2024-07-15 18:30:36.747134] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.309 [2024-07-15 18:30:36.747170] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.309 2024/07/15 18:30:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:14.309 [2024-07-15 18:30:36.761754] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.309 [2024-07-15 18:30:36.761786] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.309 2024/07/15 18:30:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:11:14.309 [2024-07-15 18:30:36.776851] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.309 [2024-07-15 18:30:36.776884] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.309 2024/07/15 18:30:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:14.309 [2024-07-15 18:30:36.791286] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.309 [2024-07-15 18:30:36.791322] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.309 2024/07/15 18:30:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:14.309 [2024-07-15 18:30:36.801982] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.309 [2024-07-15 18:30:36.802014] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.309 2024/07/15 18:30:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:14.309 [2024-07-15 18:30:36.816812] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.309 [2024-07-15 18:30:36.816846] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.309 2024/07/15 18:30:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:14.309 [2024-07-15 18:30:36.827870] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.309 [2024-07-15 18:30:36.827903] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.309 2024/07/15 18:30:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:14.309 [2024-07-15 18:30:36.843022] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.309 [2024-07-15 18:30:36.843055] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.309 2024/07/15 18:30:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:14.309 [2024-07-15 18:30:36.858763] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.309 [2024-07-15 18:30:36.858798] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.310 2024/07/15 18:30:36 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:14.310 [2024-07-15 18:30:36.873269] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.310 [2024-07-15 18:30:36.873303] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.310 2024/07/15 18:30:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:14.310 [2024-07-15 18:30:36.888594] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.310 [2024-07-15 18:30:36.888626] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.310 2024/07/15 18:30:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:14.310 [2024-07-15 18:30:36.903035] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.310 [2024-07-15 18:30:36.903072] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.310 2024/07/15 18:30:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:14.310 [2024-07-15 18:30:36.917174] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.310 [2024-07-15 18:30:36.917205] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.310 2024/07/15 18:30:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:14.569 [2024-07-15 18:30:36.932029] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.569 [2024-07-15 18:30:36.932063] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.569 2024/07/15 18:30:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:14.569 [2024-07-15 18:30:36.951362] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.569 [2024-07-15 18:30:36.951402] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.569 2024/07/15 18:30:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:14.569 [2024-07-15 18:30:36.966409] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.569 [2024-07-15 18:30:36.966445] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.569 2024/07/15 18:30:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:14.569 [2024-07-15 18:30:36.981816] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.569 [2024-07-15 18:30:36.981851] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.569 2024/07/15 18:30:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:14.569 [2024-07-15 18:30:36.996679] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.569 [2024-07-15 18:30:36.996715] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.569 2024/07/15 18:30:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:14.569 [2024-07-15 18:30:37.012376] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.569 [2024-07-15 18:30:37.012413] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.569 2024/07/15 18:30:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:14.569 [2024-07-15 18:30:37.026736] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.569 [2024-07-15 18:30:37.026774] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.569 2024/07/15 18:30:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:14.569 [2024-07-15 18:30:37.040901] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.569 [2024-07-15 18:30:37.040933] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.569 2024/07/15 18:30:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:14.569 [2024-07-15 18:30:37.055577] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.569 [2024-07-15 18:30:37.055612] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.569 2024/07/15 18:30:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:14.569 [2024-07-15 18:30:37.070918] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.569 [2024-07-15 18:30:37.070952] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.569 2024/07/15 18:30:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:14.569 [2024-07-15 18:30:37.085189] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.569 [2024-07-15 18:30:37.085222] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.569 2024/07/15 18:30:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:14.569 [2024-07-15 18:30:37.095953] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.569 [2024-07-15 18:30:37.095986] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.569 2024/07/15 18:30:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:14.570 [2024-07-15 18:30:37.111188] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.570 [2024-07-15 18:30:37.111221] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.570 2024/07/15 18:30:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:14.570 [2024-07-15 18:30:37.126286] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.570 [2024-07-15 18:30:37.126318] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.570 2024/07/15 18:30:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:14.570 [2024-07-15 18:30:37.141045] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.570 [2024-07-15 18:30:37.141077] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.570 2024/07/15 18:30:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:14.570 [2024-07-15 18:30:37.152016] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:11:14.570 [2024-07-15 18:30:37.152050] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.570 2024/07/15 18:30:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:14.570 [2024-07-15 18:30:37.166702] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.570 [2024-07-15 18:30:37.166734] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.570 2024/07/15 18:30:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:14.570 [2024-07-15 18:30:37.180741] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.570 [2024-07-15 18:30:37.180773] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.829 2024/07/15 18:30:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:14.829 [2024-07-15 18:30:37.194960] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.829 [2024-07-15 18:30:37.194994] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.829 2024/07/15 18:30:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:14.829 [2024-07-15 18:30:37.209102] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.829 [2024-07-15 18:30:37.209135] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.829 2024/07/15 18:30:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:14.829 [2024-07-15 18:30:37.223785] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.829 [2024-07-15 18:30:37.223813] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.829 2024/07/15 18:30:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:14.829 [2024-07-15 18:30:37.239056] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.829 [2024-07-15 18:30:37.239089] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.829 2024/07/15 18:30:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:14.829 [2024-07-15 18:30:37.253986] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.829 [2024-07-15 18:30:37.254016] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.829 2024/07/15 18:30:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:14.829 [2024-07-15 18:30:37.264979] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.829 [2024-07-15 18:30:37.265009] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.829 2024/07/15 18:30:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:14.829 [2024-07-15 18:30:37.279739] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.829 [2024-07-15 18:30:37.279774] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.829 2024/07/15 18:30:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:14.829 [2024-07-15 18:30:37.293887] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.829 [2024-07-15 18:30:37.293919] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.829 2024/07/15 18:30:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:14.829 [2024-07-15 18:30:37.308105] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.829 [2024-07-15 18:30:37.308139] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.829 2024/07/15 18:30:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:14.829 [2024-07-15 18:30:37.322461] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.829 [2024-07-15 18:30:37.322498] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.829 2024/07/15 18:30:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:14.830 [2024-07-15 18:30:37.336483] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:11:14.830 [2024-07-15 18:30:37.336518] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.830 2024/07/15 18:30:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:14.830 [2024-07-15 18:30:37.351317] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.830 [2024-07-15 18:30:37.351353] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.830 2024/07/15 18:30:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:14.830 [2024-07-15 18:30:37.366453] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.830 [2024-07-15 18:30:37.366487] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.830 2024/07/15 18:30:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:14.830 [2024-07-15 18:30:37.381194] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.830 [2024-07-15 18:30:37.381229] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.830 2024/07/15 18:30:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:14.830 [2024-07-15 18:30:37.396594] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.830 [2024-07-15 18:30:37.396627] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.830 2024/07/15 18:30:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:14.830 [2024-07-15 18:30:37.411161] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.830 [2024-07-15 18:30:37.411196] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.830 2024/07/15 18:30:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:14.830 [2024-07-15 18:30:37.422003] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.830 [2024-07-15 18:30:37.422036] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.830 2024/07/15 18:30:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:14.830 [2024-07-15 18:30:37.437028] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.830 [2024-07-15 18:30:37.437061] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.830 2024/07/15 18:30:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:15.089 [2024-07-15 18:30:37.452432] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.089 [2024-07-15 18:30:37.452465] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.089 2024/07/15 18:30:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:15.089 [2024-07-15 18:30:37.470758] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.089 [2024-07-15 18:30:37.470792] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.089 2024/07/15 18:30:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:15.089 [2024-07-15 18:30:37.485209] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.089 [2024-07-15 18:30:37.485242] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.089 2024/07/15 18:30:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:15.089 [2024-07-15 18:30:37.499310] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.089 [2024-07-15 18:30:37.499346] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.089 2024/07/15 18:30:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:15.089 [2024-07-15 18:30:37.513895] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.089 [2024-07-15 18:30:37.513926] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.089 2024/07/15 18:30:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:15.089 [2024-07-15 18:30:37.533063] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.089 [2024-07-15 18:30:37.533100] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.089 2024/07/15 18:30:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:15.089 [2024-07-15 18:30:37.547819] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.089 [2024-07-15 18:30:37.547855] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.089 2024/07/15 18:30:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:15.089 [2024-07-15 18:30:37.558715] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.089 [2024-07-15 18:30:37.558748] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.089 2024/07/15 18:30:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:15.089 [2024-07-15 18:30:37.573747] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.089 [2024-07-15 18:30:37.573777] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.089 2024/07/15 18:30:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:15.089 [2024-07-15 18:30:37.584988] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.089 [2024-07-15 18:30:37.585017] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.089 2024/07/15 18:30:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:15.089 [2024-07-15 18:30:37.599898] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.089 [2024-07-15 18:30:37.599929] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.089 2024/07/15 18:30:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:15.089 [2024-07-15 18:30:37.611219] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.089 [2024-07-15 18:30:37.611252] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.089 2024/07/15 18:30:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:15.090 [2024-07-15 18:30:37.625988] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.090 [2024-07-15 18:30:37.626022] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.090 2024/07/15 18:30:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:15.090 [2024-07-15 18:30:37.640019] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.090 [2024-07-15 18:30:37.640053] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.090 2024/07/15 18:30:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:15.090 [2024-07-15 18:30:37.654615] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.090 [2024-07-15 18:30:37.654648] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.090 2024/07/15 18:30:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:15.090 00:11:15.090 Latency(us) 00:11:15.090 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:15.090 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:11:15.090 Nvme1n1 : 5.01 16182.50 126.43 0.00 0.00 7902.47 3658.44 18739.61 00:11:15.090 =================================================================================================================== 00:11:15.090 Total : 16182.50 126.43 0.00 0.00 7902.47 3658.44 18739.61 00:11:15.090 [2024-07-15 18:30:37.666474] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.090 [2024-07-15 18:30:37.666504] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.090 2024/07/15 18:30:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:15.090 [2024-07-15 18:30:37.678447] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.090 [2024-07-15 18:30:37.678475] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.090 2024/07/15 18:30:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:15.090 [2024-07-15 18:30:37.690436] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.090 [2024-07-15 18:30:37.690466] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.090 2024/07/15 
18:30:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:15.365 [2024-07-15 18:30:37.702411] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.365 [2024-07-15 18:30:37.702439] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.365 2024/07/15 18:30:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:15.365 [2024-07-15 18:30:37.714392] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.365 [2024-07-15 18:30:37.714419] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.365 2024/07/15 18:30:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:15.365 [2024-07-15 18:30:37.726375] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.365 [2024-07-15 18:30:37.726403] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.366 2024/07/15 18:30:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:15.366 [2024-07-15 18:30:37.738361] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.366 [2024-07-15 18:30:37.738393] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.366 2024/07/15 18:30:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:15.366 [2024-07-15 18:30:37.750344] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.366 [2024-07-15 18:30:37.750375] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.366 2024/07/15 18:30:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:15.366 [2024-07-15 18:30:37.762324] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.366 [2024-07-15 18:30:37.762351] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.366 2024/07/15 18:30:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:15.366 [2024-07-15 
18:30:37.774304] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.366 [2024-07-15 18:30:37.774327] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.366 2024/07/15 18:30:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:15.366 [2024-07-15 18:30:37.786289] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.366 [2024-07-15 18:30:37.786313] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.366 2024/07/15 18:30:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:15.366 [2024-07-15 18:30:37.798271] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.366 [2024-07-15 18:30:37.798297] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.366 2024/07/15 18:30:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:15.366 [2024-07-15 18:30:37.810251] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.366 [2024-07-15 18:30:37.810273] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.366 2024/07/15 18:30:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:15.366 [2024-07-15 18:30:37.822235] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.366 [2024-07-15 18:30:37.822260] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.366 2024/07/15 18:30:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:15.366 [2024-07-15 18:30:37.834217] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.366 [2024-07-15 18:30:37.834241] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.366 2024/07/15 18:30:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:15.366 [2024-07-15 18:30:37.846201] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.366 [2024-07-15 18:30:37.846222] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.366 2024/07/15 18:30:37 error on JSON-RPC call, method: 
nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:15.366 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (76061) - No such process 00:11:15.366 18:30:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 76061 00:11:15.366 18:30:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:15.366 18:30:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:15.366 18:30:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:15.366 18:30:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:15.366 18:30:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:15.366 18:30:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:15.366 18:30:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:15.366 delay0 00:11:15.366 18:30:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:15.366 18:30:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:11:15.366 18:30:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:15.366 18:30:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:15.366 18:30:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:15.366 18:30:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:11:15.624 [2024-07-15 18:30:38.059141] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:11:22.176 Initializing NVMe Controllers 00:11:22.176 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:22.176 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:22.176 Initialization complete. Launching workers. 
00:11:22.176 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 168 00:11:22.176 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 455, failed to submit 33 00:11:22.176 success 290, unsuccess 165, failed 0 00:11:22.176 18:30:44 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:11:22.176 18:30:44 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:11:22.176 18:30:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:22.176 18:30:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:11:22.176 18:30:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:22.176 18:30:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:11:22.176 18:30:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:22.176 18:30:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:22.176 rmmod nvme_tcp 00:11:22.176 rmmod nvme_fabrics 00:11:22.176 rmmod nvme_keyring 00:11:22.176 18:30:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:22.176 18:30:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:11:22.176 18:30:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:11:22.176 18:30:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 75893 ']' 00:11:22.176 18:30:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 75893 00:11:22.176 18:30:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 75893 ']' 00:11:22.176 18:30:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 75893 00:11:22.176 18:30:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:11:22.176 18:30:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:22.176 18:30:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75893 00:11:22.176 18:30:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:22.177 18:30:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:22.177 killing process with pid 75893 00:11:22.177 18:30:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75893' 00:11:22.177 18:30:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 75893 00:11:22.177 18:30:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 75893 00:11:22.177 18:30:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:22.177 18:30:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:22.177 18:30:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:22.177 18:30:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:22.177 18:30:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:22.177 18:30:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:22.177 18:30:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:22.177 18:30:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:22.177 18:30:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:22.177 00:11:22.177 real 0m24.682s 00:11:22.177 user 0m39.678s 00:11:22.177 sys 0m7.838s 00:11:22.177 18:30:44 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:11:22.177 18:30:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:22.177 ************************************ 00:11:22.177 END TEST nvmf_zcopy 00:11:22.177 ************************************ 00:11:22.177 18:30:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:22.177 18:30:44 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:22.177 18:30:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:22.177 18:30:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:22.177 18:30:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:22.177 ************************************ 00:11:22.177 START TEST nvmf_nmic 00:11:22.177 ************************************ 00:11:22.177 18:30:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:22.177 * Looking for test storage... 00:11:22.177 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:22.177 18:30:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:22.177 18:30:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:11:22.177 18:30:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:22.177 18:30:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:22.177 18:30:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:22.177 18:30:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:22.177 18:30:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:22.177 18:30:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:22.177 18:30:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:22.177 18:30:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:22.177 18:30:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:22.177 18:30:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:22.177 18:30:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:11:22.177 18:30:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=ee8aff67-4252-4979-91cf-1a72f40d57b6 00:11:22.177 18:30:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:22.177 18:30:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:22.177 18:30:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:22.177 18:30:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:22.177 18:30:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:22.177 18:30:44 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:22.177 18:30:44 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:22.177 18:30:44 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:22.177 18:30:44 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.177 18:30:44 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.177 18:30:44 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.177 18:30:44 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:11:22.177 18:30:44 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.177 18:30:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:11:22.177 18:30:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:22.177 18:30:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:22.177 18:30:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:22.177 18:30:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:22.177 18:30:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:22.177 18:30:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:22.177 18:30:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:22.177 18:30:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:22.177 18:30:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:22.177 18:30:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:22.177 18:30:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:11:22.177 18:30:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:22.177 18:30:44 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:22.177 18:30:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:22.177 18:30:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:22.177 18:30:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:22.177 18:30:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:22.177 18:30:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:22.177 18:30:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:22.177 18:30:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:22.177 18:30:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:22.177 18:30:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:22.177 18:30:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:22.177 18:30:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:22.177 18:30:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:22.177 18:30:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:22.177 18:30:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:22.177 18:30:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:22.177 18:30:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:22.177 18:30:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:22.177 18:30:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:22.177 18:30:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:22.177 18:30:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:22.177 18:30:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:22.436 18:30:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:22.436 18:30:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:22.436 18:30:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:22.436 18:30:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:22.436 18:30:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:22.436 Cannot find device "nvmf_tgt_br" 00:11:22.436 18:30:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # true 00:11:22.436 18:30:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:22.436 Cannot find device "nvmf_tgt_br2" 00:11:22.436 18:30:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # true 00:11:22.436 18:30:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:22.436 18:30:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:22.436 Cannot find device "nvmf_tgt_br" 00:11:22.436 18:30:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # true 00:11:22.436 18:30:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:22.436 Cannot find device "nvmf_tgt_br2" 00:11:22.436 18:30:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # true 00:11:22.436 18:30:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@160 -- # ip link delete nvmf_br type 
bridge 00:11:22.436 18:30:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:22.436 18:30:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:22.436 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:22.436 18:30:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:11:22.436 18:30:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:22.436 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:22.436 18:30:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:11:22.436 18:30:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:22.436 18:30:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:22.436 18:30:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:22.436 18:30:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:22.436 18:30:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:22.436 18:30:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:22.436 18:30:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:22.436 18:30:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:22.694 18:30:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:22.694 18:30:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:22.694 18:30:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:22.694 18:30:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:22.694 18:30:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:22.694 18:30:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:22.694 18:30:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:22.694 18:30:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:22.694 18:30:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:22.694 18:30:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:22.694 18:30:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:22.694 18:30:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:22.694 18:30:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:22.694 18:30:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:22.694 18:30:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:22.694 18:30:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:22.694 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:22.694 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:11:22.694 00:11:22.694 --- 10.0.0.2 ping statistics --- 00:11:22.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:22.694 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:11:22.694 18:30:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:22.694 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:22.694 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 00:11:22.694 00:11:22.694 --- 10.0.0.3 ping statistics --- 00:11:22.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:22.694 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:11:22.694 18:30:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:22.694 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:22.694 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:11:22.694 00:11:22.694 --- 10.0.0.1 ping statistics --- 00:11:22.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:22.694 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:11:22.694 18:30:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:22.694 18:30:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@433 -- # return 0 00:11:22.694 18:30:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:22.694 18:30:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:22.694 18:30:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:22.694 18:30:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:22.694 18:30:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:22.694 18:30:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:22.694 18:30:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:22.694 18:30:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:11:22.694 18:30:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:22.694 18:30:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:22.694 18:30:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:22.694 18:30:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=76387 00:11:22.694 18:30:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 76387 00:11:22.694 18:30:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:22.694 18:30:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 76387 ']' 00:11:22.694 18:30:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:22.694 18:30:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:22.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:22.695 18:30:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:22.695 18:30:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:22.695 18:30:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:22.695 [2024-07-15 18:30:45.266037] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 
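[Reference sketch] Condensed for reference, the test fixture that nvmf_veth_init and nvmfappstart assemble in the log above amounts to the shell sketch below. Every command is lifted from the xtrace lines already shown; it assumes root, iproute2/iptables, and an SPDK build tree at the path the test scripts use. It is a reading aid, not the scripts themselves.

  # Build the isolated NVMe/TCP test topology: one network namespace for the
  # target, veth pairs for initiator and two target paths, all joined by a bridge.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target, first path
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # target, second path
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                      # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up;  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up;   ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                      # host -> target paths
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1             # target -> initiator
  modprobe nvme-tcp                                             # host-side NVMe/TCP initiator
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &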
00:11:22.695 [2024-07-15 18:30:45.266531] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:22.953 [2024-07-15 18:30:45.407748] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:22.953 [2024-07-15 18:30:45.494144] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:22.953 [2024-07-15 18:30:45.494199] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:22.953 [2024-07-15 18:30:45.494209] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:22.953 [2024-07-15 18:30:45.494217] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:22.953 [2024-07-15 18:30:45.494224] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:22.953 [2024-07-15 18:30:45.494429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:22.953 [2024-07-15 18:30:45.494638] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:22.953 [2024-07-15 18:30:45.495435] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:22.953 [2024-07-15 18:30:45.495436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:23.538 18:30:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:23.538 18:30:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:11:23.538 18:30:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:23.538 18:30:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:23.538 18:30:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:23.797 18:30:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:23.797 18:30:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:23.797 18:30:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:23.797 18:30:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:23.797 [2024-07-15 18:30:46.170256] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:23.797 18:30:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:23.797 18:30:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:23.797 18:30:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:23.797 18:30:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:23.797 Malloc0 00:11:23.797 18:30:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:23.797 18:30:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:23.797 18:30:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:23.797 18:30:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:23.797 18:30:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:23.797 18:30:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 
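[Reference sketch] For orientation, the rpc_cmd calls in this stretch of nmic.sh are, as far as this log shows, thin wrappers around scripts/rpc.py talking to the target's default UNIX RPC socket (/var/tmp/spdk.sock, per the waitforlisten message above). A minimal sketch of the equivalent direct invocations, with flags exactly as logged, is:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py     # same script fio.sh names explicitly below
  $rpc nvmf_create_transport -t tcp -o -u 8192        # TCP transport, 8192-byte IO unit
  $rpc bdev_malloc_create 64 512 -b Malloc0           # 64 MiB RAM-backed bdev, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # test case1 (next in the log) deliberately reuses Malloc0: the bdev is already
  # claimed exclusive_write by cnode1, so adding it to a second subsystem must
  # fail with the JSON-RPC -32602 "Invalid parameters" error shown below.
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # expected to fail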
00:11:23.797 18:30:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:23.797 18:30:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:23.797 18:30:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:23.797 18:30:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:23.797 18:30:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:23.797 18:30:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:23.797 [2024-07-15 18:30:46.247209] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:23.797 test case1: single bdev can't be used in multiple subsystems 00:11:23.797 18:30:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:23.797 18:30:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:11:23.797 18:30:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:11:23.797 18:30:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:23.797 18:30:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:23.797 18:30:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:23.797 18:30:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:23.797 18:30:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:23.797 18:30:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:23.797 18:30:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:23.797 18:30:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:11:23.797 18:30:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:11:23.797 18:30:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:23.797 18:30:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:23.797 [2024-07-15 18:30:46.283022] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:11:23.797 [2024-07-15 18:30:46.283060] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:11:23.797 [2024-07-15 18:30:46.283070] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.797 2024/07/15 18:30:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:23.797 request: 00:11:23.797 { 00:11:23.797 "method": "nvmf_subsystem_add_ns", 00:11:23.797 "params": { 00:11:23.797 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:23.797 "namespace": { 00:11:23.797 "bdev_name": "Malloc0", 00:11:23.797 "no_auto_visible": false 00:11:23.797 } 00:11:23.797 } 00:11:23.797 } 00:11:23.797 Got JSON-RPC error response 00:11:23.797 GoRPCClient: error on JSON-RPC call 00:11:23.797 18:30:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:11:23.797 18:30:46 nvmf_tcp.nvmf_nmic -- 
target/nmic.sh@29 -- # nmic_status=1 00:11:23.797 18:30:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:11:23.797 18:30:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:11:23.797 Adding namespace failed - expected result. 00:11:23.797 18:30:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:11:23.797 test case2: host connect to nvmf target in multiple paths 00:11:23.797 18:30:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:11:23.797 18:30:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:23.797 18:30:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:23.797 [2024-07-15 18:30:46.303124] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:11:23.797 18:30:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:23.797 18:30:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid=ee8aff67-4252-4979-91cf-1a72f40d57b6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:24.057 18:30:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid=ee8aff67-4252-4979-91cf-1a72f40d57b6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:11:24.057 18:30:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:11:24.057 18:30:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:11:24.057 18:30:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:24.057 18:30:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:24.057 18:30:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:11:26.588 18:30:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:26.588 18:30:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:26.588 18:30:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:26.588 18:30:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:26.588 18:30:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:26.588 18:30:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:11:26.588 18:30:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:26.588 [global] 00:11:26.588 thread=1 00:11:26.588 invalidate=1 00:11:26.588 rw=write 00:11:26.588 time_based=1 00:11:26.588 runtime=1 00:11:26.588 ioengine=libaio 00:11:26.588 direct=1 00:11:26.588 bs=4096 00:11:26.588 iodepth=1 00:11:26.588 norandommap=0 00:11:26.588 numjobs=1 00:11:26.588 00:11:26.588 verify_dump=1 00:11:26.588 verify_backlog=512 00:11:26.588 verify_state_save=0 00:11:26.588 do_verify=1 00:11:26.588 verify=crc32c-intel 00:11:26.588 [job0] 00:11:26.588 filename=/dev/nvme0n1 00:11:26.588 Could not set queue depth (nvme0n1) 00:11:26.588 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:26.588 fio-3.35 00:11:26.588 
Starting 1 thread 00:11:27.519 00:11:27.519 job0: (groupid=0, jobs=1): err= 0: pid=76491: Mon Jul 15 18:30:49 2024 00:11:27.519 read: IOPS=4741, BW=18.5MiB/s (19.4MB/s)(18.5MiB/1001msec) 00:11:27.519 slat (nsec): min=7895, max=27928, avg=8464.26, stdev=1097.96 00:11:27.519 clat (usec): min=88, max=270, avg=104.40, stdev= 7.60 00:11:27.519 lat (usec): min=96, max=278, avg=112.86, stdev= 7.75 00:11:27.519 clat percentiles (usec): 00:11:27.519 | 1.00th=[ 93], 5.00th=[ 96], 10.00th=[ 97], 20.00th=[ 99], 00:11:27.519 | 30.00th=[ 101], 40.00th=[ 102], 50.00th=[ 103], 60.00th=[ 105], 00:11:27.519 | 70.00th=[ 106], 80.00th=[ 110], 90.00th=[ 113], 95.00th=[ 117], 00:11:27.519 | 99.00th=[ 128], 99.50th=[ 139], 99.90th=[ 167], 99.95th=[ 184], 00:11:27.519 | 99.99th=[ 273] 00:11:27.519 write: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec); 0 zone resets 00:11:27.519 slat (usec): min=12, max=112, avg=14.55, stdev= 5.33 00:11:27.519 clat (usec): min=58, max=1417, avg=74.42, stdev=20.63 00:11:27.519 lat (usec): min=74, max=1431, avg=88.97, stdev=21.85 00:11:27.519 clat percentiles (usec): 00:11:27.519 | 1.00th=[ 64], 5.00th=[ 67], 10.00th=[ 68], 20.00th=[ 69], 00:11:27.519 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 75], 00:11:27.519 | 70.00th=[ 76], 80.00th=[ 79], 90.00th=[ 83], 95.00th=[ 89], 00:11:27.519 | 99.00th=[ 105], 99.50th=[ 112], 99.90th=[ 165], 99.95th=[ 215], 00:11:27.519 | 99.99th=[ 1418] 00:11:27.519 bw ( KiB/s): min=20480, max=20480, per=100.00%, avg=20480.00, stdev= 0.00, samples=1 00:11:27.519 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:11:27.519 lat (usec) : 100=63.50%, 250=36.48%, 500=0.01% 00:11:27.519 lat (msec) : 2=0.01% 00:11:27.519 cpu : usr=2.70%, sys=7.90%, ctx=9866, majf=0, minf=2 00:11:27.519 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:27.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:27.519 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:27.519 issued rwts: total=4746,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:27.519 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:27.519 00:11:27.519 Run status group 0 (all jobs): 00:11:27.519 READ: bw=18.5MiB/s (19.4MB/s), 18.5MiB/s-18.5MiB/s (19.4MB/s-19.4MB/s), io=18.5MiB (19.4MB), run=1001-1001msec 00:11:27.519 WRITE: bw=20.0MiB/s (20.9MB/s), 20.0MiB/s-20.0MiB/s (20.9MB/s-20.9MB/s), io=20.0MiB (21.0MB), run=1001-1001msec 00:11:27.519 00:11:27.519 Disk stats (read/write): 00:11:27.519 nvme0n1: ios=4334/4608, merge=0/0, ticks=469/385, in_queue=854, util=91.19% 00:11:27.519 18:30:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:27.519 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:27.519 18:30:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:27.519 18:30:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:11:27.519 18:30:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:27.519 18:30:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:27.519 18:30:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:27.519 18:30:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:27.519 18:30:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:11:27.519 18:30:50 
nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:11:27.519 18:30:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:11:27.519 18:30:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:27.519 18:30:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:11:27.777 18:30:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:27.777 18:30:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:11:27.777 18:30:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:27.777 18:30:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:27.777 rmmod nvme_tcp 00:11:27.777 rmmod nvme_fabrics 00:11:27.777 rmmod nvme_keyring 00:11:27.777 18:30:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:27.777 18:30:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:11:27.777 18:30:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:11:27.777 18:30:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 76387 ']' 00:11:27.777 18:30:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 76387 00:11:27.777 18:30:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 76387 ']' 00:11:27.777 18:30:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 76387 00:11:27.777 18:30:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:11:27.777 18:30:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:27.777 18:30:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76387 00:11:27.777 killing process with pid 76387 00:11:27.777 18:30:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:27.777 18:30:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:27.777 18:30:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76387' 00:11:27.777 18:30:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 76387 00:11:27.777 18:30:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 76387 00:11:28.034 18:30:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:28.034 18:30:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:28.034 18:30:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:28.034 18:30:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:28.034 18:30:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:28.034 18:30:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:28.034 18:30:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:28.034 18:30:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:28.034 18:30:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:28.034 00:11:28.034 real 0m5.922s 00:11:28.034 user 0m19.376s 00:11:28.034 sys 0m1.617s 00:11:28.034 18:30:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:28.034 ************************************ 00:11:28.034 END TEST nvmf_nmic 00:11:28.034 ************************************ 00:11:28.034 18:30:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:28.034 18:30:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:28.034 
18:30:50 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:28.034 18:30:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:28.034 18:30:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:28.034 18:30:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:28.034 ************************************ 00:11:28.034 START TEST nvmf_fio_target 00:11:28.034 ************************************ 00:11:28.034 18:30:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:28.329 * Looking for test storage... 00:11:28.329 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:28.329 18:30:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:28.329 18:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:11:28.329 18:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:28.329 18:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:28.329 18:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:28.329 18:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:28.329 18:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:28.329 18:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:28.329 18:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:28.329 18:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:28.329 18:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:28.329 18:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:28.329 18:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:11:28.329 18:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=ee8aff67-4252-4979-91cf-1a72f40d57b6 00:11:28.329 18:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:28.329 18:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:28.329 18:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:28.329 18:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:28.329 18:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:28.329 18:30:50 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:28.329 18:30:50 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:28.329 18:30:50 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:28.329 18:30:50 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.329 18:30:50 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.329 18:30:50 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.329 18:30:50 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:11:28.329 18:30:50 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.329 18:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:11:28.329 18:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:28.329 18:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:28.329 18:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:28.329 18:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:28.329 18:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:28.329 18:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:28.329 18:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:28.329 18:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:28.329 18:30:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:28.329 18:30:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:28.329 18:30:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:28.329 18:30:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:11:28.329 18:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:28.329 18:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:28.329 18:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:28.329 18:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:28.329 18:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:28.330 18:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:28.330 18:30:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:28.330 18:30:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:28.330 18:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:28.330 18:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:28.330 18:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:28.330 18:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:28.330 18:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:28.330 18:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:28.330 18:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:28.330 18:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:28.330 18:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:28.330 18:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:28.330 18:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:28.330 18:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:28.330 18:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:28.330 18:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:28.330 18:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:28.330 18:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:28.330 18:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:28.330 18:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:28.330 18:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:28.330 18:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:28.330 Cannot find device "nvmf_tgt_br" 00:11:28.330 18:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # true 00:11:28.330 18:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:28.330 Cannot find device "nvmf_tgt_br2" 00:11:28.330 18:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # true 00:11:28.330 18:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:28.330 18:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # 
ip link set nvmf_tgt_br down 00:11:28.330 Cannot find device "nvmf_tgt_br" 00:11:28.330 18:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # true 00:11:28.330 18:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:28.330 Cannot find device "nvmf_tgt_br2" 00:11:28.330 18:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # true 00:11:28.330 18:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:28.330 18:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:28.602 18:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:28.602 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:28.602 18:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:11:28.602 18:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:28.602 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:28.602 18:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:11:28.602 18:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:28.602 18:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:28.602 18:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:28.602 18:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:28.602 18:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:28.602 18:30:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:28.602 18:30:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:28.602 18:30:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:28.602 18:30:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:28.602 18:30:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:28.602 18:30:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:28.602 18:30:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:28.602 18:30:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:28.602 18:30:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:28.602 18:30:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:28.602 18:30:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:28.602 18:30:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:28.602 18:30:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:28.602 18:30:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:28.602 18:30:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:11:28.602 18:30:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:28.602 18:30:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:28.602 18:30:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:28.602 18:30:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:28.602 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:28.602 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.122 ms 00:11:28.602 00:11:28.602 --- 10.0.0.2 ping statistics --- 00:11:28.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:28.602 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:11:28.602 18:30:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:28.602 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:28.602 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:11:28.602 00:11:28.602 --- 10.0.0.3 ping statistics --- 00:11:28.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:28.602 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:11:28.602 18:30:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:28.602 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:28.602 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:11:28.602 00:11:28.602 --- 10.0.0.1 ping statistics --- 00:11:28.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:28.602 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:11:28.602 18:30:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:28.602 18:30:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@433 -- # return 0 00:11:28.602 18:30:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:28.602 18:30:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:28.602 18:30:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:28.602 18:30:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:28.602 18:30:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:28.602 18:30:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:28.602 18:30:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:28.860 18:30:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:11:28.860 18:30:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:28.860 18:30:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:28.860 18:30:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.860 18:30:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=76676 00:11:28.860 18:30:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 76676 00:11:28.860 18:30:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:28.860 18:30:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 76676 ']' 00:11:28.860 18:30:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:28.860 18:30:51 
nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:28.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:28.860 18:30:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:28.860 18:30:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:28.860 18:30:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.860 [2024-07-15 18:30:51.294477] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:11:28.860 [2024-07-15 18:30:51.294553] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:28.860 [2024-07-15 18:30:51.435420] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:29.117 [2024-07-15 18:30:51.523851] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:29.117 [2024-07-15 18:30:51.523904] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:29.117 [2024-07-15 18:30:51.523913] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:29.117 [2024-07-15 18:30:51.523922] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:29.117 [2024-07-15 18:30:51.523929] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:29.117 [2024-07-15 18:30:51.524158] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:29.117 [2024-07-15 18:30:51.524399] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:29.117 [2024-07-15 18:30:51.525089] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:29.117 [2024-07-15 18:30:51.525091] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:29.683 18:30:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:29.683 18:30:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:11:29.683 18:30:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:29.683 18:30:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:29.683 18:30:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.683 18:30:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:29.683 18:30:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:29.941 [2024-07-15 18:30:52.362117] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:29.941 18:30:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:30.199 18:30:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:11:30.199 18:30:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:30.457 18:30:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 
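[Reference sketch] Put together, the bdev plumbing fio.sh performs here and in the lines that follow reduces to roughly the sketch below (rpc.py path as defined at the top of this test; each bdev_malloc_create prints the new bdev's name, which is what the malloc_bdevs/raid_malloc_bdevs/concat_malloc_bdevs variables in the log are collecting). The bdev names shown are the ones returned on this run.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  m0=$($rpc bdev_malloc_create 64 512)    # -> Malloc0
  m1=$($rpc bdev_malloc_create 64 512)    # -> Malloc1
  m2=$($rpc bdev_malloc_create 64 512)    # -> Malloc2
  m3=$($rpc bdev_malloc_create 64 512)    # -> Malloc3
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b "$m2 $m3"             # striped raid0
  m4=$($rpc bdev_malloc_create 64 512)    # -> Malloc4
  m5=$($rpc bdev_malloc_create 64 512)    # -> Malloc5
  m6=$($rpc bdev_malloc_create 64 512)    # -> Malloc6
  $rpc bdev_raid_create -n concat0 -r concat -z 64 -b "$m4 $m5 $m6"  # concatenation
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$m0"
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$m1"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0

After the nvme connect further down, these four namespaces surface on the initiator as /dev/nvme0n1 through /dev/nvme0n4, which is what the four-job fio write/verify run at the end of this excerpt targets.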
00:11:30.457 18:30:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:30.457 18:30:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:11:30.457 18:30:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:30.715 18:30:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:11:30.715 18:30:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:11:30.972 18:30:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:31.230 18:30:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:11:31.230 18:30:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:31.505 18:30:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:11:31.505 18:30:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:31.780 18:30:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:11:31.780 18:30:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:11:32.037 18:30:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:32.037 18:30:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:32.037 18:30:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:32.294 18:30:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:32.294 18:30:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:32.553 18:30:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:32.812 [2024-07-15 18:30:55.178152] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:32.812 18:30:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:32.812 18:30:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:33.070 18:30:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid=ee8aff67-4252-4979-91cf-1a72f40d57b6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:33.328 18:30:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:11:33.328 18:30:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:11:33.328 18:30:55 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:33.328 18:30:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:11:33.328 18:30:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:11:33.328 18:30:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:11:35.231 18:30:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:35.231 18:30:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:35.231 18:30:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:35.231 18:30:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:11:35.231 18:30:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:35.231 18:30:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:11:35.231 18:30:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:35.231 [global] 00:11:35.231 thread=1 00:11:35.231 invalidate=1 00:11:35.231 rw=write 00:11:35.231 time_based=1 00:11:35.231 runtime=1 00:11:35.231 ioengine=libaio 00:11:35.231 direct=1 00:11:35.231 bs=4096 00:11:35.231 iodepth=1 00:11:35.231 norandommap=0 00:11:35.231 numjobs=1 00:11:35.231 00:11:35.231 verify_dump=1 00:11:35.231 verify_backlog=512 00:11:35.231 verify_state_save=0 00:11:35.231 do_verify=1 00:11:35.231 verify=crc32c-intel 00:11:35.231 [job0] 00:11:35.231 filename=/dev/nvme0n1 00:11:35.231 [job1] 00:11:35.231 filename=/dev/nvme0n2 00:11:35.231 [job2] 00:11:35.231 filename=/dev/nvme0n3 00:11:35.231 [job3] 00:11:35.231 filename=/dev/nvme0n4 00:11:35.490 Could not set queue depth (nvme0n1) 00:11:35.490 Could not set queue depth (nvme0n2) 00:11:35.490 Could not set queue depth (nvme0n3) 00:11:35.490 Could not set queue depth (nvme0n4) 00:11:35.490 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:35.490 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:35.490 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:35.490 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:35.490 fio-3.35 00:11:35.490 Starting 4 threads 00:11:36.865 00:11:36.865 job0: (groupid=0, jobs=1): err= 0: pid=76958: Mon Jul 15 18:30:59 2024 00:11:36.865 read: IOPS=2479, BW=9918KiB/s (10.2MB/s)(9928KiB/1001msec) 00:11:36.865 slat (nsec): min=6706, max=24176, avg=8070.86, stdev=1268.04 00:11:36.865 clat (usec): min=141, max=766, avg=210.05, stdev=24.05 00:11:36.865 lat (usec): min=148, max=774, avg=218.12, stdev=23.91 00:11:36.865 clat percentiles (usec): 00:11:36.865 | 1.00th=[ 176], 5.00th=[ 182], 10.00th=[ 188], 20.00th=[ 194], 00:11:36.865 | 30.00th=[ 200], 40.00th=[ 206], 50.00th=[ 210], 60.00th=[ 215], 00:11:36.865 | 70.00th=[ 219], 80.00th=[ 223], 90.00th=[ 229], 95.00th=[ 235], 00:11:36.865 | 99.00th=[ 273], 99.50th=[ 314], 99.90th=[ 502], 99.95th=[ 603], 00:11:36.865 | 99.99th=[ 766] 00:11:36.865 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:11:36.865 slat (usec): min=7, max=168, avg=13.66, stdev= 7.64 00:11:36.865 clat (usec): min=3, max=570, avg=163.79, stdev=30.36 
00:11:36.865 lat (usec): min=98, max=598, avg=177.45, stdev=30.40 00:11:36.865 clat percentiles (usec): 00:11:36.865 | 1.00th=[ 94], 5.00th=[ 111], 10.00th=[ 139], 20.00th=[ 145], 00:11:36.865 | 30.00th=[ 151], 40.00th=[ 157], 50.00th=[ 163], 60.00th=[ 167], 00:11:36.865 | 70.00th=[ 174], 80.00th=[ 180], 90.00th=[ 194], 95.00th=[ 219], 00:11:36.865 | 99.00th=[ 255], 99.50th=[ 269], 99.90th=[ 371], 99.95th=[ 379], 00:11:36.865 | 99.99th=[ 570] 00:11:36.865 bw ( KiB/s): min=12288, max=12288, per=24.86%, avg=12288.00, stdev= 0.00, samples=1 00:11:36.865 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:36.865 lat (usec) : 4=0.04%, 100=1.37%, 250=96.99%, 500=1.53%, 750=0.06% 00:11:36.865 lat (usec) : 1000=0.02% 00:11:36.865 cpu : usr=0.90%, sys=4.80%, ctx=5045, majf=0, minf=11 00:11:36.865 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:36.865 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:36.865 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:36.865 issued rwts: total=2482,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:36.865 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:36.865 job1: (groupid=0, jobs=1): err= 0: pid=76959: Mon Jul 15 18:30:59 2024 00:11:36.865 read: IOPS=2481, BW=9926KiB/s (10.2MB/s)(9936KiB/1001msec) 00:11:36.865 slat (nsec): min=5983, max=47822, avg=7010.79, stdev=1416.87 00:11:36.865 clat (usec): min=99, max=628, avg=211.15, stdev=22.61 00:11:36.865 lat (usec): min=107, max=634, avg=218.16, stdev=22.95 00:11:36.865 clat percentiles (usec): 00:11:36.865 | 1.00th=[ 178], 5.00th=[ 184], 10.00th=[ 190], 20.00th=[ 198], 00:11:36.865 | 30.00th=[ 202], 40.00th=[ 206], 50.00th=[ 210], 60.00th=[ 215], 00:11:36.865 | 70.00th=[ 219], 80.00th=[ 223], 90.00th=[ 229], 95.00th=[ 235], 00:11:36.865 | 99.00th=[ 277], 99.50th=[ 306], 99.90th=[ 490], 99.95th=[ 586], 00:11:36.865 | 99.99th=[ 627] 00:11:36.865 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:11:36.865 slat (nsec): min=7347, max=73081, avg=13576.62, stdev=6117.96 00:11:36.865 clat (usec): min=82, max=516, avg=163.77, stdev=29.90 00:11:36.865 lat (usec): min=96, max=558, avg=177.34, stdev=30.09 00:11:36.865 clat percentiles (usec): 00:11:36.865 | 1.00th=[ 94], 5.00th=[ 109], 10.00th=[ 137], 20.00th=[ 145], 00:11:36.865 | 30.00th=[ 151], 40.00th=[ 157], 50.00th=[ 163], 60.00th=[ 167], 00:11:36.865 | 70.00th=[ 174], 80.00th=[ 180], 90.00th=[ 196], 95.00th=[ 217], 00:11:36.865 | 99.00th=[ 253], 99.50th=[ 269], 99.90th=[ 330], 99.95th=[ 347], 00:11:36.865 | 99.99th=[ 519] 00:11:36.865 bw ( KiB/s): min=12288, max=12288, per=24.86%, avg=12288.00, stdev= 0.00, samples=1 00:11:36.865 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:36.865 lat (usec) : 100=1.57%, 250=97.03%, 500=1.35%, 750=0.06% 00:11:36.865 cpu : usr=0.90%, sys=4.30%, ctx=5049, majf=0, minf=7 00:11:36.865 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:36.865 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:36.865 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:36.865 issued rwts: total=2484,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:36.865 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:36.865 job2: (groupid=0, jobs=1): err= 0: pid=76960: Mon Jul 15 18:30:59 2024 00:11:36.865 read: IOPS=3513, BW=13.7MiB/s (14.4MB/s)(13.7MiB/1001msec) 00:11:36.865 slat (nsec): min=8066, max=31996, 
avg=9885.29, stdev=3029.20 00:11:36.865 clat (usec): min=121, max=386, avg=141.73, stdev=10.47 00:11:36.865 lat (usec): min=130, max=395, avg=151.62, stdev=11.51 00:11:36.865 clat percentiles (usec): 00:11:36.865 | 1.00th=[ 127], 5.00th=[ 130], 10.00th=[ 133], 20.00th=[ 135], 00:11:36.865 | 30.00th=[ 137], 40.00th=[ 139], 50.00th=[ 141], 60.00th=[ 143], 00:11:36.865 | 70.00th=[ 145], 80.00th=[ 149], 90.00th=[ 153], 95.00th=[ 159], 00:11:36.865 | 99.00th=[ 169], 99.50th=[ 176], 99.90th=[ 190], 99.95th=[ 367], 00:11:36.865 | 99.99th=[ 388] 00:11:36.865 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:11:36.865 slat (usec): min=12, max=205, avg=17.69, stdev= 8.87 00:11:36.865 clat (usec): min=88, max=608, avg=110.40, stdev=12.59 00:11:36.865 lat (usec): min=101, max=621, avg=128.09, stdev=17.11 00:11:36.865 clat percentiles (usec): 00:11:36.865 | 1.00th=[ 94], 5.00th=[ 97], 10.00th=[ 100], 20.00th=[ 102], 00:11:36.865 | 30.00th=[ 105], 40.00th=[ 108], 50.00th=[ 110], 60.00th=[ 112], 00:11:36.865 | 70.00th=[ 115], 80.00th=[ 118], 90.00th=[ 123], 95.00th=[ 128], 00:11:36.865 | 99.00th=[ 139], 99.50th=[ 143], 99.90th=[ 161], 99.95th=[ 163], 00:11:36.865 | 99.99th=[ 611] 00:11:36.865 bw ( KiB/s): min=15440, max=15440, per=31.23%, avg=15440.00, stdev= 0.00, samples=1 00:11:36.865 iops : min= 3860, max= 3860, avg=3860.00, stdev= 0.00, samples=1 00:11:36.865 lat (usec) : 100=5.84%, 250=94.11%, 500=0.03%, 750=0.01% 00:11:36.865 cpu : usr=1.50%, sys=7.60%, ctx=7101, majf=0, minf=8 00:11:36.865 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:36.865 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:36.865 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:36.865 issued rwts: total=3517,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:36.865 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:36.865 job3: (groupid=0, jobs=1): err= 0: pid=76961: Mon Jul 15 18:30:59 2024 00:11:36.865 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:11:36.865 slat (nsec): min=8129, max=29449, avg=8995.69, stdev=1601.09 00:11:36.865 clat (usec): min=119, max=1462, avg=141.59, stdev=24.47 00:11:36.865 lat (usec): min=128, max=1471, avg=150.59, stdev=24.54 00:11:36.865 clat percentiles (usec): 00:11:36.865 | 1.00th=[ 126], 5.00th=[ 129], 10.00th=[ 131], 20.00th=[ 133], 00:11:36.865 | 30.00th=[ 137], 40.00th=[ 139], 50.00th=[ 139], 60.00th=[ 143], 00:11:36.865 | 70.00th=[ 145], 80.00th=[ 149], 90.00th=[ 155], 95.00th=[ 161], 00:11:36.865 | 99.00th=[ 180], 99.50th=[ 184], 99.90th=[ 196], 99.95th=[ 206], 00:11:36.865 | 99.99th=[ 1467] 00:11:36.865 write: IOPS=3664, BW=14.3MiB/s (15.0MB/s)(14.3MiB/1001msec); 0 zone resets 00:11:36.865 slat (usec): min=12, max=159, avg=14.36, stdev= 4.42 00:11:36.865 clat (usec): min=79, max=186, avg=109.28, stdev= 9.71 00:11:36.865 lat (usec): min=93, max=317, avg=123.64, stdev=11.24 00:11:36.865 clat percentiles (usec): 00:11:36.865 | 1.00th=[ 93], 5.00th=[ 97], 10.00th=[ 99], 20.00th=[ 102], 00:11:36.865 | 30.00th=[ 104], 40.00th=[ 106], 50.00th=[ 109], 60.00th=[ 111], 00:11:36.866 | 70.00th=[ 114], 80.00th=[ 117], 90.00th=[ 122], 95.00th=[ 127], 00:11:36.866 | 99.00th=[ 143], 99.50th=[ 147], 99.90th=[ 159], 99.95th=[ 167], 00:11:36.866 | 99.99th=[ 186] 00:11:36.866 bw ( KiB/s): min=16384, max=16384, per=33.14%, avg=16384.00, stdev= 0.00, samples=1 00:11:36.866 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:11:36.866 lat (usec) : 
100=7.27%, 250=92.72% 00:11:36.866 lat (msec) : 2=0.01% 00:11:36.866 cpu : usr=1.60%, sys=6.30%, ctx=7253, majf=0, minf=9 00:11:36.866 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:36.866 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:36.866 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:36.866 issued rwts: total=3584,3668,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:36.866 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:36.866 00:11:36.866 Run status group 0 (all jobs): 00:11:36.866 READ: bw=47.1MiB/s (49.4MB/s), 9918KiB/s-14.0MiB/s (10.2MB/s-14.7MB/s), io=47.1MiB (49.4MB), run=1001-1001msec 00:11:36.866 WRITE: bw=48.3MiB/s (50.6MB/s), 9.99MiB/s-14.3MiB/s (10.5MB/s-15.0MB/s), io=48.3MiB (50.7MB), run=1001-1001msec 00:11:36.866 00:11:36.866 Disk stats (read/write): 00:11:36.866 nvme0n1: ios=2098/2410, merge=0/0, ticks=451/401, in_queue=852, util=89.17% 00:11:36.866 nvme0n2: ios=2097/2410, merge=0/0, ticks=411/393, in_queue=804, util=89.08% 00:11:36.866 nvme0n3: ios=3100/3072, merge=0/0, ticks=471/365, in_queue=836, util=90.24% 00:11:36.866 nvme0n4: ios=3072/3249, merge=0/0, ticks=431/375, in_queue=806, util=89.78% 00:11:36.866 18:30:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:36.866 [global] 00:11:36.866 thread=1 00:11:36.866 invalidate=1 00:11:36.866 rw=randwrite 00:11:36.866 time_based=1 00:11:36.866 runtime=1 00:11:36.866 ioengine=libaio 00:11:36.866 direct=1 00:11:36.866 bs=4096 00:11:36.866 iodepth=1 00:11:36.866 norandommap=0 00:11:36.866 numjobs=1 00:11:36.866 00:11:36.866 verify_dump=1 00:11:36.866 verify_backlog=512 00:11:36.866 verify_state_save=0 00:11:36.866 do_verify=1 00:11:36.866 verify=crc32c-intel 00:11:36.866 [job0] 00:11:36.866 filename=/dev/nvme0n1 00:11:36.866 [job1] 00:11:36.866 filename=/dev/nvme0n2 00:11:36.866 [job2] 00:11:36.866 filename=/dev/nvme0n3 00:11:36.866 [job3] 00:11:36.866 filename=/dev/nvme0n4 00:11:36.866 Could not set queue depth (nvme0n1) 00:11:36.866 Could not set queue depth (nvme0n2) 00:11:36.866 Could not set queue depth (nvme0n3) 00:11:36.866 Could not set queue depth (nvme0n4) 00:11:36.866 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:36.866 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:36.866 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:36.866 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:36.866 fio-3.35 00:11:36.866 Starting 4 threads 00:11:38.268 00:11:38.268 job0: (groupid=0, jobs=1): err= 0: pid=77025: Mon Jul 15 18:31:00 2024 00:11:38.268 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:11:38.268 slat (nsec): min=8029, max=37315, avg=8903.13, stdev=1672.34 00:11:38.268 clat (usec): min=110, max=615, avg=137.21, stdev=19.11 00:11:38.268 lat (usec): min=119, max=623, avg=146.11, stdev=19.24 00:11:38.268 clat percentiles (usec): 00:11:38.268 | 1.00th=[ 117], 5.00th=[ 121], 10.00th=[ 123], 20.00th=[ 126], 00:11:38.268 | 30.00th=[ 128], 40.00th=[ 130], 50.00th=[ 133], 60.00th=[ 135], 00:11:38.268 | 70.00th=[ 139], 80.00th=[ 145], 90.00th=[ 163], 95.00th=[ 174], 00:11:38.268 | 99.00th=[ 194], 99.50th=[ 206], 99.90th=[ 281], 99.95th=[ 330], 00:11:38.268 
| 99.99th=[ 619] 00:11:38.268 write: IOPS=3774, BW=14.7MiB/s (15.5MB/s)(14.8MiB/1001msec); 0 zone resets 00:11:38.268 slat (usec): min=12, max=129, avg=15.13, stdev= 6.54 00:11:38.268 clat (usec): min=79, max=2204, avg=109.09, stdev=44.85 00:11:38.268 lat (usec): min=92, max=2221, avg=124.22, stdev=45.92 00:11:38.268 clat percentiles (usec): 00:11:38.268 | 1.00th=[ 85], 5.00th=[ 89], 10.00th=[ 92], 20.00th=[ 94], 00:11:38.268 | 30.00th=[ 97], 40.00th=[ 99], 50.00th=[ 101], 60.00th=[ 104], 00:11:38.268 | 70.00th=[ 109], 80.00th=[ 117], 90.00th=[ 129], 95.00th=[ 143], 00:11:38.268 | 99.00th=[ 273], 99.50th=[ 293], 99.90th=[ 383], 99.95th=[ 635], 00:11:38.268 | 99.99th=[ 2212] 00:11:38.268 bw ( KiB/s): min=16384, max=16384, per=32.70%, avg=16384.00, stdev= 0.00, samples=1 00:11:38.268 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:11:38.268 lat (usec) : 100=23.16%, 250=75.98%, 500=0.81%, 750=0.03% 00:11:38.268 lat (msec) : 4=0.01% 00:11:38.268 cpu : usr=1.50%, sys=6.80%, ctx=7365, majf=0, minf=17 00:11:38.268 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:38.268 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:38.268 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:38.268 issued rwts: total=3584,3778,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:38.268 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:38.268 job1: (groupid=0, jobs=1): err= 0: pid=77026: Mon Jul 15 18:31:00 2024 00:11:38.268 read: IOPS=2065, BW=8264KiB/s (8462kB/s)(8272KiB/1001msec) 00:11:38.268 slat (nsec): min=6740, max=24548, avg=8380.42, stdev=1563.97 00:11:38.268 clat (usec): min=141, max=488, avg=238.32, stdev=27.70 00:11:38.268 lat (usec): min=148, max=496, avg=246.70, stdev=27.70 00:11:38.268 clat percentiles (usec): 00:11:38.269 | 1.00th=[ 165], 5.00th=[ 204], 10.00th=[ 215], 20.00th=[ 223], 00:11:38.269 | 30.00th=[ 227], 40.00th=[ 233], 50.00th=[ 237], 60.00th=[ 241], 00:11:38.269 | 70.00th=[ 247], 80.00th=[ 253], 90.00th=[ 262], 95.00th=[ 277], 00:11:38.269 | 99.00th=[ 347], 99.50th=[ 371], 99.90th=[ 437], 99.95th=[ 457], 00:11:38.269 | 99.99th=[ 490] 00:11:38.269 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:11:38.269 slat (usec): min=7, max=249, avg=12.99, stdev= 5.57 00:11:38.269 clat (usec): min=89, max=327, avg=176.81, stdev=26.79 00:11:38.269 lat (usec): min=103, max=468, avg=189.81, stdev=27.22 00:11:38.269 clat percentiles (usec): 00:11:38.269 | 1.00th=[ 100], 5.00th=[ 145], 10.00th=[ 153], 20.00th=[ 161], 00:11:38.269 | 30.00th=[ 167], 40.00th=[ 172], 50.00th=[ 176], 60.00th=[ 180], 00:11:38.269 | 70.00th=[ 184], 80.00th=[ 192], 90.00th=[ 202], 95.00th=[ 221], 00:11:38.269 | 99.00th=[ 273], 99.50th=[ 285], 99.90th=[ 310], 99.95th=[ 310], 00:11:38.269 | 99.99th=[ 326] 00:11:38.269 bw ( KiB/s): min=10840, max=10840, per=21.63%, avg=10840.00, stdev= 0.00, samples=1 00:11:38.269 iops : min= 2710, max= 2710, avg=2710.00, stdev= 0.00, samples=1 00:11:38.269 lat (usec) : 100=0.58%, 250=87.62%, 500=11.80% 00:11:38.269 cpu : usr=1.50%, sys=3.60%, ctx=4630, majf=0, minf=12 00:11:38.269 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:38.269 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:38.269 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:38.269 issued rwts: total=2068,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:38.269 latency : target=0, window=0, percentile=100.00%, 
depth=1 00:11:38.269 job2: (groupid=0, jobs=1): err= 0: pid=77027: Mon Jul 15 18:31:00 2024 00:11:38.269 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:11:38.269 slat (nsec): min=8045, max=27807, avg=8868.34, stdev=1381.47 00:11:38.269 clat (usec): min=120, max=1595, avg=142.08, stdev=25.76 00:11:38.269 lat (usec): min=128, max=1605, avg=150.95, stdev=25.84 00:11:38.269 clat percentiles (usec): 00:11:38.269 | 1.00th=[ 127], 5.00th=[ 130], 10.00th=[ 133], 20.00th=[ 135], 00:11:38.269 | 30.00th=[ 137], 40.00th=[ 139], 50.00th=[ 141], 60.00th=[ 143], 00:11:38.269 | 70.00th=[ 145], 80.00th=[ 149], 90.00th=[ 153], 95.00th=[ 157], 00:11:38.269 | 99.00th=[ 169], 99.50th=[ 174], 99.90th=[ 188], 99.95th=[ 200], 00:11:38.269 | 99.99th=[ 1598] 00:11:38.269 write: IOPS=3638, BW=14.2MiB/s (14.9MB/s)(14.2MiB/1001msec); 0 zone resets 00:11:38.269 slat (usec): min=12, max=107, avg=14.59, stdev= 5.80 00:11:38.269 clat (usec): min=86, max=395, avg=109.63, stdev=10.55 00:11:38.269 lat (usec): min=99, max=408, avg=124.21, stdev=13.17 00:11:38.269 clat percentiles (usec): 00:11:38.269 | 1.00th=[ 93], 5.00th=[ 97], 10.00th=[ 99], 20.00th=[ 102], 00:11:38.269 | 30.00th=[ 104], 40.00th=[ 106], 50.00th=[ 109], 60.00th=[ 111], 00:11:38.269 | 70.00th=[ 114], 80.00th=[ 117], 90.00th=[ 123], 95.00th=[ 127], 00:11:38.269 | 99.00th=[ 139], 99.50th=[ 143], 99.90th=[ 165], 99.95th=[ 208], 00:11:38.269 | 99.99th=[ 396] 00:11:38.269 bw ( KiB/s): min=16384, max=16384, per=32.70%, avg=16384.00, stdev= 0.00, samples=1 00:11:38.269 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:11:38.269 lat (usec) : 100=6.77%, 250=93.21%, 500=0.01% 00:11:38.269 lat (msec) : 2=0.01% 00:11:38.269 cpu : usr=1.80%, sys=6.20%, ctx=7226, majf=0, minf=7 00:11:38.269 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:38.269 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:38.269 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:38.269 issued rwts: total=3584,3642,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:38.269 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:38.269 job3: (groupid=0, jobs=1): err= 0: pid=77028: Mon Jul 15 18:31:00 2024 00:11:38.269 read: IOPS=2065, BW=8264KiB/s (8462kB/s)(8272KiB/1001msec) 00:11:38.269 slat (nsec): min=6055, max=25482, avg=7899.85, stdev=1944.09 00:11:38.269 clat (usec): min=142, max=495, avg=238.62, stdev=25.99 00:11:38.269 lat (usec): min=149, max=504, avg=246.52, stdev=26.41 00:11:38.269 clat percentiles (usec): 00:11:38.269 | 1.00th=[ 174], 5.00th=[ 206], 10.00th=[ 215], 20.00th=[ 223], 00:11:38.269 | 30.00th=[ 229], 40.00th=[ 233], 50.00th=[ 237], 60.00th=[ 241], 00:11:38.269 | 70.00th=[ 245], 80.00th=[ 253], 90.00th=[ 265], 95.00th=[ 277], 00:11:38.269 | 99.00th=[ 334], 99.50th=[ 351], 99.90th=[ 429], 99.95th=[ 469], 00:11:38.269 | 99.99th=[ 498] 00:11:38.269 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:11:38.269 slat (nsec): min=7404, max=37505, avg=13134.39, stdev=3240.57 00:11:38.269 clat (usec): min=100, max=400, avg=176.82, stdev=25.51 00:11:38.269 lat (usec): min=113, max=421, avg=189.95, stdev=25.53 00:11:38.269 clat percentiles (usec): 00:11:38.269 | 1.00th=[ 112], 5.00th=[ 147], 10.00th=[ 153], 20.00th=[ 161], 00:11:38.269 | 30.00th=[ 167], 40.00th=[ 172], 50.00th=[ 176], 60.00th=[ 180], 00:11:38.269 | 70.00th=[ 186], 80.00th=[ 190], 90.00th=[ 202], 95.00th=[ 217], 00:11:38.269 | 99.00th=[ 265], 99.50th=[ 277], 99.90th=[ 355], 99.95th=[ 
375], 00:11:38.269 | 99.99th=[ 400] 00:11:38.269 bw ( KiB/s): min=10832, max=10832, per=21.62%, avg=10832.00, stdev= 0.00, samples=1 00:11:38.269 iops : min= 2708, max= 2708, avg=2708.00, stdev= 0.00, samples=1 00:11:38.269 lat (usec) : 250=88.57%, 500=11.43% 00:11:38.269 cpu : usr=1.40%, sys=3.60%, ctx=4628, majf=0, minf=9 00:11:38.269 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:38.269 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:38.269 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:38.269 issued rwts: total=2068,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:38.269 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:38.269 00:11:38.269 Run status group 0 (all jobs): 00:11:38.269 READ: bw=44.1MiB/s (46.3MB/s), 8264KiB/s-14.0MiB/s (8462kB/s-14.7MB/s), io=44.2MiB (46.3MB), run=1001-1001msec 00:11:38.269 WRITE: bw=48.9MiB/s (51.3MB/s), 9.99MiB/s-14.7MiB/s (10.5MB/s-15.5MB/s), io=49.0MiB (51.4MB), run=1001-1001msec 00:11:38.269 00:11:38.269 Disk stats (read/write): 00:11:38.269 nvme0n1: ios=3122/3308, merge=0/0, ticks=467/390, in_queue=857, util=89.48% 00:11:38.269 nvme0n2: ios=2002/2048, merge=0/0, ticks=483/359, in_queue=842, util=89.30% 00:11:38.269 nvme0n3: ios=3103/3249, merge=0/0, ticks=466/371, in_queue=837, util=90.17% 00:11:38.269 nvme0n4: ios=1953/2048, merge=0/0, ticks=457/369, in_queue=826, util=89.83% 00:11:38.269 18:31:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:38.269 [global] 00:11:38.269 thread=1 00:11:38.269 invalidate=1 00:11:38.269 rw=write 00:11:38.269 time_based=1 00:11:38.269 runtime=1 00:11:38.269 ioengine=libaio 00:11:38.269 direct=1 00:11:38.269 bs=4096 00:11:38.269 iodepth=128 00:11:38.269 norandommap=0 00:11:38.269 numjobs=1 00:11:38.269 00:11:38.269 verify_dump=1 00:11:38.269 verify_backlog=512 00:11:38.269 verify_state_save=0 00:11:38.269 do_verify=1 00:11:38.269 verify=crc32c-intel 00:11:38.269 [job0] 00:11:38.269 filename=/dev/nvme0n1 00:11:38.269 [job1] 00:11:38.269 filename=/dev/nvme0n2 00:11:38.269 [job2] 00:11:38.269 filename=/dev/nvme0n3 00:11:38.269 [job3] 00:11:38.269 filename=/dev/nvme0n4 00:11:38.269 Could not set queue depth (nvme0n1) 00:11:38.269 Could not set queue depth (nvme0n2) 00:11:38.269 Could not set queue depth (nvme0n3) 00:11:38.269 Could not set queue depth (nvme0n4) 00:11:38.269 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:38.269 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:38.269 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:38.269 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:38.269 fio-3.35 00:11:38.269 Starting 4 threads 00:11:39.645 00:11:39.645 job0: (groupid=0, jobs=1): err= 0: pid=77081: Mon Jul 15 18:31:02 2024 00:11:39.645 read: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec) 00:11:39.645 slat (usec): min=5, max=7649, avg=171.22, stdev=803.11 00:11:39.645 clat (usec): min=437, max=31822, avg=22200.63, stdev=4360.25 00:11:39.645 lat (usec): min=4033, max=31842, avg=22371.85, stdev=4330.16 00:11:39.645 clat percentiles (usec): 00:11:39.645 | 1.00th=[ 4948], 5.00th=[15401], 10.00th=[17433], 20.00th=[18482], 00:11:39.645 | 30.00th=[20579], 
40.00th=[21365], 50.00th=[21890], 60.00th=[23200], 00:11:39.645 | 70.00th=[24511], 80.00th=[26346], 90.00th=[27657], 95.00th=[28705], 00:11:39.645 | 99.00th=[31851], 99.50th=[31851], 99.90th=[31851], 99.95th=[31851], 00:11:39.645 | 99.99th=[31851] 00:11:39.645 write: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec); 0 zone resets 00:11:39.645 slat (usec): min=8, max=8546, avg=142.80, stdev=539.52 00:11:39.645 clat (usec): min=11893, max=31671, avg=18904.85, stdev=4465.16 00:11:39.645 lat (usec): min=12274, max=31707, avg=19047.65, stdev=4483.89 00:11:39.645 clat percentiles (usec): 00:11:39.645 | 1.00th=[12649], 5.00th=[13960], 10.00th=[14222], 20.00th=[14746], 00:11:39.645 | 30.00th=[15533], 40.00th=[16909], 50.00th=[17957], 60.00th=[19268], 00:11:39.645 | 70.00th=[20317], 80.00th=[22676], 90.00th=[25560], 95.00th=[28443], 00:11:39.645 | 99.00th=[30540], 99.50th=[30802], 99.90th=[31589], 99.95th=[31589], 00:11:39.645 | 99.99th=[31589] 00:11:39.645 bw ( KiB/s): min=12288, max=12312, per=16.43%, avg=12300.00, stdev=16.97, samples=2 00:11:39.645 iops : min= 3072, max= 3078, avg=3075.00, stdev= 4.24, samples=2 00:11:39.645 lat (usec) : 500=0.02% 00:11:39.645 lat (msec) : 10=0.64%, 20=46.95%, 50=52.40% 00:11:39.645 cpu : usr=3.49%, sys=13.07%, ctx=584, majf=0, minf=8 00:11:39.645 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:11:39.645 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:39.645 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:39.645 issued rwts: total=3069,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:39.645 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:39.645 job1: (groupid=0, jobs=1): err= 0: pid=77082: Mon Jul 15 18:31:02 2024 00:11:39.645 read: IOPS=5788, BW=22.6MiB/s (23.7MB/s)(22.6MiB/1001msec) 00:11:39.645 slat (usec): min=5, max=4619, avg=85.21, stdev=388.36 00:11:39.645 clat (usec): min=685, max=32776, avg=11106.91, stdev=4264.52 00:11:39.645 lat (usec): min=705, max=32830, avg=11192.12, stdev=4298.08 00:11:39.645 clat percentiles (usec): 00:11:39.645 | 1.00th=[ 6521], 5.00th=[ 7767], 10.00th=[ 8455], 20.00th=[ 9372], 00:11:39.645 | 30.00th=[ 9634], 40.00th=[ 9765], 50.00th=[10028], 60.00th=[10290], 00:11:39.645 | 70.00th=[10683], 80.00th=[11207], 90.00th=[12780], 95.00th=[22938], 00:11:39.645 | 99.00th=[28705], 99.50th=[28967], 99.90th=[30540], 99.95th=[32375], 00:11:39.645 | 99.99th=[32900] 00:11:39.645 write: IOPS=6137, BW=24.0MiB/s (25.1MB/s)(24.0MiB/1001msec); 0 zone resets 00:11:39.645 slat (usec): min=7, max=4417, avg=73.17, stdev=252.24 00:11:39.645 clat (usec): min=6079, max=23500, avg=10139.39, stdev=2034.33 00:11:39.645 lat (usec): min=6099, max=23529, avg=10212.55, stdev=2040.61 00:11:39.645 clat percentiles (usec): 00:11:39.645 | 1.00th=[ 7111], 5.00th=[ 7635], 10.00th=[ 8291], 20.00th=[ 9241], 00:11:39.645 | 30.00th=[ 9634], 40.00th=[ 9765], 50.00th=[ 9765], 60.00th=[ 9896], 00:11:39.645 | 70.00th=[10028], 80.00th=[10290], 90.00th=[12125], 95.00th=[15401], 00:11:39.645 | 99.00th=[17957], 99.50th=[18482], 99.90th=[20055], 99.95th=[20055], 00:11:39.645 | 99.99th=[23462] 00:11:39.645 bw ( KiB/s): min=22728, max=26476, per=32.87%, avg=24602.00, stdev=2650.24, samples=2 00:11:39.645 iops : min= 5682, max= 6619, avg=6150.50, stdev=662.56, samples=2 00:11:39.645 lat (usec) : 750=0.03%, 1000=0.02% 00:11:39.645 lat (msec) : 2=0.02%, 4=0.17%, 10=58.12%, 20=38.48%, 50=3.17% 00:11:39.645 cpu : usr=5.30%, sys=19.80%, ctx=785, majf=0, minf=1 00:11:39.645 IO 
depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:11:39.645 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:39.645 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:39.645 issued rwts: total=5794,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:39.645 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:39.645 job2: (groupid=0, jobs=1): err= 0: pid=77083: Mon Jul 15 18:31:02 2024 00:11:39.645 read: IOPS=4179, BW=16.3MiB/s (17.1MB/s)(16.4MiB/1004msec) 00:11:39.645 slat (usec): min=9, max=6522, avg=115.53, stdev=527.84 00:11:39.645 clat (usec): min=507, max=31434, avg=15364.87, stdev=5546.60 00:11:39.645 lat (usec): min=3795, max=31453, avg=15480.40, stdev=5569.11 00:11:39.645 clat percentiles (usec): 00:11:39.645 | 1.00th=[ 9634], 5.00th=[10290], 10.00th=[10552], 20.00th=[11076], 00:11:39.645 | 30.00th=[11600], 40.00th=[11994], 50.00th=[12387], 60.00th=[12911], 00:11:39.645 | 70.00th=[18744], 80.00th=[21103], 90.00th=[23200], 95.00th=[26608], 00:11:39.645 | 99.00th=[31065], 99.50th=[31327], 99.90th=[31327], 99.95th=[31327], 00:11:39.645 | 99.99th=[31327] 00:11:39.645 write: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec); 0 zone resets 00:11:39.645 slat (usec): min=21, max=4653, avg=100.95, stdev=397.34 00:11:39.645 clat (usec): min=9325, max=26621, avg=13472.81, stdev=3697.54 00:11:39.645 lat (usec): min=9382, max=26656, avg=13573.76, stdev=3711.26 00:11:39.645 clat percentiles (usec): 00:11:39.645 | 1.00th=[ 9634], 5.00th=[10028], 10.00th=[10159], 20.00th=[10421], 00:11:39.645 | 30.00th=[10683], 40.00th=[11338], 50.00th=[11863], 60.00th=[12387], 00:11:39.645 | 70.00th=[15401], 80.00th=[16909], 90.00th=[19268], 95.00th=[20841], 00:11:39.645 | 99.00th=[23725], 99.50th=[24773], 99.90th=[26608], 99.95th=[26608], 00:11:39.645 | 99.99th=[26608] 00:11:39.645 bw ( KiB/s): min=12560, max=24080, per=24.48%, avg=18320.00, stdev=8145.87, samples=2 00:11:39.646 iops : min= 3140, max= 6020, avg=4580.00, stdev=2036.47, samples=2 00:11:39.646 lat (usec) : 750=0.01% 00:11:39.646 lat (msec) : 4=0.03%, 10=3.65%, 20=81.22%, 50=15.08% 00:11:39.646 cpu : usr=5.98%, sys=17.75%, ctx=532, majf=0, minf=3 00:11:39.646 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:11:39.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:39.646 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:39.646 issued rwts: total=4196,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:39.646 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:39.646 job3: (groupid=0, jobs=1): err= 0: pid=77084: Mon Jul 15 18:31:02 2024 00:11:39.646 read: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec) 00:11:39.646 slat (usec): min=5, max=7314, avg=105.45, stdev=472.28 00:11:39.646 clat (usec): min=9010, max=30249, avg=14228.26, stdev=5256.47 00:11:39.646 lat (usec): min=9031, max=30298, avg=14333.71, stdev=5285.81 00:11:39.646 clat percentiles (usec): 00:11:39.646 | 1.00th=[ 9372], 5.00th=[ 9765], 10.00th=[10159], 20.00th=[10945], 00:11:39.646 | 30.00th=[11338], 40.00th=[11731], 50.00th=[12125], 60.00th=[12387], 00:11:39.646 | 70.00th=[12780], 80.00th=[19006], 90.00th=[24511], 95.00th=[26346], 00:11:39.646 | 99.00th=[28181], 99.50th=[28181], 99.90th=[30016], 99.95th=[30278], 00:11:39.646 | 99.99th=[30278] 00:11:39.646 write: IOPS=4953, BW=19.3MiB/s (20.3MB/s)(19.4MiB/1002msec); 0 zone resets 00:11:39.646 slat (usec): min=9, max=3938, avg=92.09, stdev=326.59 
00:11:39.646 clat (usec): min=263, max=24033, avg=12303.83, stdev=3245.30 00:11:39.646 lat (usec): min=2051, max=24096, avg=12395.92, stdev=3262.80 00:11:39.646 clat percentiles (usec): 00:11:39.646 | 1.00th=[ 5866], 5.00th=[ 9372], 10.00th=[ 9503], 20.00th=[ 9896], 00:11:39.646 | 30.00th=[10683], 40.00th=[11338], 50.00th=[11600], 60.00th=[11731], 00:11:39.646 | 70.00th=[12125], 80.00th=[13829], 90.00th=[17695], 95.00th=[19530], 00:11:39.646 | 99.00th=[21365], 99.50th=[22152], 99.90th=[23462], 99.95th=[23462], 00:11:39.646 | 99.99th=[23987] 00:11:39.646 bw ( KiB/s): min=15264, max=23462, per=25.87%, avg=19363.00, stdev=5796.86, samples=2 00:11:39.646 iops : min= 3816, max= 5865, avg=4840.50, stdev=1448.86, samples=2 00:11:39.646 lat (usec) : 500=0.01% 00:11:39.646 lat (msec) : 4=0.43%, 10=14.77%, 20=73.90%, 50=10.89% 00:11:39.646 cpu : usr=6.29%, sys=19.28%, ctx=728, majf=0, minf=1 00:11:39.646 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:11:39.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:39.646 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:39.646 issued rwts: total=4608,4963,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:39.646 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:39.646 00:11:39.646 Run status group 0 (all jobs): 00:11:39.646 READ: bw=68.7MiB/s (72.1MB/s), 12.0MiB/s-22.6MiB/s (12.5MB/s-23.7MB/s), io=69.0MiB (72.4MB), run=1001-1004msec 00:11:39.646 WRITE: bw=73.1MiB/s (76.6MB/s), 12.0MiB/s-24.0MiB/s (12.5MB/s-25.1MB/s), io=73.4MiB (77.0MB), run=1001-1004msec 00:11:39.646 00:11:39.646 Disk stats (read/write): 00:11:39.646 nvme0n1: ios=2610/2594, merge=0/0, ticks=13724/10698, in_queue=24422, util=87.69% 00:11:39.646 nvme0n2: ios=5344/5632, merge=0/0, ticks=24172/21443, in_queue=45615, util=88.47% 00:11:39.646 nvme0n3: ios=3590/3751, merge=0/0, ticks=13327/10336, in_queue=23663, util=89.38% 00:11:39.646 nvme0n4: ios=3914/4096, merge=0/0, ticks=14902/11995, in_queue=26897, util=89.33% 00:11:39.646 18:31:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:39.646 [global] 00:11:39.646 thread=1 00:11:39.646 invalidate=1 00:11:39.646 rw=randwrite 00:11:39.646 time_based=1 00:11:39.646 runtime=1 00:11:39.646 ioengine=libaio 00:11:39.646 direct=1 00:11:39.646 bs=4096 00:11:39.646 iodepth=128 00:11:39.646 norandommap=0 00:11:39.646 numjobs=1 00:11:39.646 00:11:39.646 verify_dump=1 00:11:39.646 verify_backlog=512 00:11:39.646 verify_state_save=0 00:11:39.646 do_verify=1 00:11:39.646 verify=crc32c-intel 00:11:39.646 [job0] 00:11:39.646 filename=/dev/nvme0n1 00:11:39.646 [job1] 00:11:39.646 filename=/dev/nvme0n2 00:11:39.646 [job2] 00:11:39.646 filename=/dev/nvme0n3 00:11:39.646 [job3] 00:11:39.646 filename=/dev/nvme0n4 00:11:39.646 Could not set queue depth (nvme0n1) 00:11:39.646 Could not set queue depth (nvme0n2) 00:11:39.646 Could not set queue depth (nvme0n3) 00:11:39.646 Could not set queue depth (nvme0n4) 00:11:39.905 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:39.905 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:39.905 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:39.905 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=128 00:11:39.905 fio-3.35 00:11:39.905 Starting 4 threads 00:11:41.284 00:11:41.284 job0: (groupid=0, jobs=1): err= 0: pid=77144: Mon Jul 15 18:31:03 2024 00:11:41.284 read: IOPS=2310, BW=9244KiB/s (9465kB/s)(9336KiB/1010msec) 00:11:41.284 slat (usec): min=7, max=12394, avg=200.08, stdev=984.21 00:11:41.284 clat (usec): min=9445, max=50929, avg=28336.35, stdev=8820.06 00:11:41.284 lat (usec): min=9464, max=50956, avg=28536.43, stdev=8864.42 00:11:41.284 clat percentiles (usec): 00:11:41.284 | 1.00th=[10814], 5.00th=[12125], 10.00th=[15664], 20.00th=[21890], 00:11:41.284 | 30.00th=[24773], 40.00th=[25822], 50.00th=[27395], 60.00th=[28967], 00:11:41.284 | 70.00th=[31851], 80.00th=[36439], 90.00th=[40109], 95.00th=[45351], 00:11:41.284 | 99.00th=[49546], 99.50th=[50070], 99.90th=[50594], 99.95th=[50594], 00:11:41.284 | 99.99th=[51119] 00:11:41.284 write: IOPS=2534, BW=9.90MiB/s (10.4MB/s)(10.0MiB/1010msec); 0 zone resets 00:11:41.284 slat (usec): min=9, max=23712, avg=195.30, stdev=966.52 00:11:41.284 clat (usec): min=13188, max=54751, avg=23903.62, stdev=6617.16 00:11:41.284 lat (usec): min=13221, max=54813, avg=24098.92, stdev=6698.24 00:11:41.284 clat percentiles (usec): 00:11:41.284 | 1.00th=[14484], 5.00th=[16057], 10.00th=[16909], 20.00th=[18744], 00:11:41.284 | 30.00th=[19792], 40.00th=[20841], 50.00th=[22676], 60.00th=[23987], 00:11:41.284 | 70.00th=[25822], 80.00th=[28181], 90.00th=[33817], 95.00th=[40109], 00:11:41.284 | 99.00th=[42206], 99.50th=[42206], 99.90th=[53740], 99.95th=[54264], 00:11:41.284 | 99.99th=[54789] 00:11:41.284 bw ( KiB/s): min=10152, max=10328, per=15.87%, avg=10240.00, stdev=124.45, samples=2 00:11:41.284 iops : min= 2538, max= 2582, avg=2560.00, stdev=31.11, samples=2 00:11:41.284 lat (msec) : 10=0.18%, 20=24.40%, 50=75.30%, 100=0.12% 00:11:41.284 cpu : usr=3.27%, sys=10.51%, ctx=708, majf=0, minf=11 00:11:41.284 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:11:41.284 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:41.284 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:41.284 issued rwts: total=2334,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:41.284 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:41.284 job1: (groupid=0, jobs=1): err= 0: pid=77145: Mon Jul 15 18:31:03 2024 00:11:41.284 read: IOPS=7649, BW=29.9MiB/s (31.3MB/s)(30.0MiB/1004msec) 00:11:41.284 slat (usec): min=8, max=4324, avg=58.46, stdev=234.33 00:11:41.284 clat (usec): min=2981, max=12391, avg=8385.28, stdev=1071.41 00:11:41.284 lat (usec): min=3005, max=13478, avg=8443.73, stdev=1080.85 00:11:41.284 clat percentiles (usec): 00:11:41.284 | 1.00th=[ 5604], 5.00th=[ 6652], 10.00th=[ 7177], 20.00th=[ 7635], 00:11:41.284 | 30.00th=[ 7963], 40.00th=[ 8094], 50.00th=[ 8356], 60.00th=[ 8586], 00:11:41.284 | 70.00th=[ 8848], 80.00th=[ 9241], 90.00th=[ 9634], 95.00th=[10159], 00:11:41.284 | 99.00th=[11207], 99.50th=[11338], 99.90th=[12125], 99.95th=[12256], 00:11:41.284 | 99.99th=[12387] 00:11:41.284 write: IOPS=7837, BW=30.6MiB/s (32.1MB/s)(30.7MiB/1004msec); 0 zone resets 00:11:41.284 slat (usec): min=16, max=3745, avg=59.20, stdev=210.56 00:11:41.284 clat (usec): min=2967, max=13437, avg=7962.73, stdev=1237.84 00:11:41.284 lat (usec): min=2998, max=13471, avg=8021.93, stdev=1240.19 00:11:41.284 clat percentiles (usec): 00:11:41.284 | 1.00th=[ 4686], 5.00th=[ 6128], 10.00th=[ 6521], 20.00th=[ 7111], 00:11:41.284 | 30.00th=[ 7439], 40.00th=[ 7701], 50.00th=[ 7898], 60.00th=[ 8094], 
00:11:41.284 | 70.00th=[ 8356], 80.00th=[ 8717], 90.00th=[ 9765], 95.00th=[10028], 00:11:41.284 | 99.00th=[12125], 99.50th=[12387], 99.90th=[12518], 99.95th=[12518], 00:11:41.284 | 99.99th=[13435] 00:11:41.284 bw ( KiB/s): min=29835, max=32152, per=48.03%, avg=30993.50, stdev=1638.37, samples=2 00:11:41.284 iops : min= 7458, max= 8038, avg=7748.00, stdev=410.12, samples=2 00:11:41.284 lat (msec) : 4=0.25%, 10=93.95%, 20=5.80% 00:11:41.284 cpu : usr=8.47%, sys=28.91%, ctx=789, majf=0, minf=13 00:11:41.284 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:11:41.284 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:41.284 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:41.284 issued rwts: total=7680,7869,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:41.284 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:41.284 job2: (groupid=0, jobs=1): err= 0: pid=77146: Mon Jul 15 18:31:03 2024 00:11:41.284 read: IOPS=2537, BW=9.91MiB/s (10.4MB/s)(10.0MiB/1009msec) 00:11:41.284 slat (usec): min=7, max=15195, avg=204.58, stdev=1003.03 00:11:41.284 clat (usec): min=14358, max=50241, avg=25074.78, stdev=7675.59 00:11:41.284 lat (usec): min=14386, max=50285, avg=25279.37, stdev=7746.28 00:11:41.284 clat percentiles (usec): 00:11:41.284 | 1.00th=[14615], 5.00th=[14877], 10.00th=[16057], 20.00th=[18220], 00:11:41.284 | 30.00th=[20055], 40.00th=[21627], 50.00th=[23725], 60.00th=[26084], 00:11:41.284 | 70.00th=[28967], 80.00th=[31589], 90.00th=[35914], 95.00th=[38011], 00:11:41.284 | 99.00th=[48497], 99.50th=[48497], 99.90th=[48497], 99.95th=[50070], 00:11:41.284 | 99.99th=[50070] 00:11:41.284 write: IOPS=2768, BW=10.8MiB/s (11.3MB/s)(10.9MiB/1009msec); 0 zone resets 00:11:41.284 slat (usec): min=9, max=10908, avg=159.54, stdev=640.29 00:11:41.284 clat (usec): min=5290, max=50727, avg=22795.68, stdev=9320.75 00:11:41.284 lat (usec): min=6222, max=52972, avg=22955.22, stdev=9362.64 00:11:41.284 clat percentiles (usec): 00:11:41.284 | 1.00th=[ 7308], 5.00th=[10814], 10.00th=[13173], 20.00th=[14746], 00:11:41.284 | 30.00th=[16450], 40.00th=[19530], 50.00th=[20841], 60.00th=[22676], 00:11:41.284 | 70.00th=[26084], 80.00th=[30802], 90.00th=[38011], 95.00th=[40633], 00:11:41.284 | 99.00th=[48497], 99.50th=[49546], 99.90th=[50594], 99.95th=[50594], 00:11:41.284 | 99.99th=[50594] 00:11:41.284 bw ( KiB/s): min= 9032, max=12312, per=16.54%, avg=10672.00, stdev=2319.31, samples=2 00:11:41.284 iops : min= 2258, max= 3078, avg=2668.00, stdev=579.83, samples=2 00:11:41.284 lat (msec) : 10=1.48%, 20=35.91%, 50=62.38%, 100=0.24% 00:11:41.284 cpu : usr=2.38%, sys=12.60%, ctx=716, majf=0, minf=15 00:11:41.284 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:11:41.284 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:41.284 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:41.284 issued rwts: total=2560,2793,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:41.284 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:41.284 job3: (groupid=0, jobs=1): err= 0: pid=77147: Mon Jul 15 18:31:03 2024 00:11:41.284 read: IOPS=2597, BW=10.1MiB/s (10.6MB/s)(10.2MiB/1009msec) 00:11:41.284 slat (usec): min=5, max=11745, avg=139.75, stdev=758.32 00:11:41.284 clat (usec): min=8234, max=38976, avg=16266.93, stdev=4197.07 00:11:41.284 lat (usec): min=8253, max=38997, avg=16406.68, stdev=4275.20 00:11:41.284 clat percentiles (usec): 00:11:41.284 | 1.00th=[ 9765], 
5.00th=[10421], 10.00th=[11469], 20.00th=[11863], 00:11:41.284 | 30.00th=[14484], 40.00th=[15270], 50.00th=[15795], 60.00th=[16909], 00:11:41.284 | 70.00th=[17695], 80.00th=[19268], 90.00th=[21103], 95.00th=[24773], 00:11:41.284 | 99.00th=[29492], 99.50th=[37487], 99.90th=[39060], 99.95th=[39060], 00:11:41.284 | 99.99th=[39060] 00:11:41.284 write: IOPS=3044, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1009msec); 0 zone resets 00:11:41.284 slat (usec): min=7, max=14223, avg=195.73, stdev=698.37 00:11:41.284 clat (usec): min=3973, max=62194, avg=27728.57, stdev=12676.87 00:11:41.284 lat (usec): min=4014, max=62209, avg=27924.30, stdev=12760.04 00:11:41.284 clat percentiles (usec): 00:11:41.284 | 1.00th=[ 9110], 5.00th=[10159], 10.00th=[10421], 20.00th=[11076], 00:11:41.284 | 30.00th=[19530], 40.00th=[26608], 50.00th=[29230], 60.00th=[31589], 00:11:41.284 | 70.00th=[35390], 80.00th=[38011], 90.00th=[43779], 95.00th=[48497], 00:11:41.284 | 99.00th=[57934], 99.50th=[60556], 99.90th=[61604], 99.95th=[61604], 00:11:41.284 | 99.99th=[62129] 00:11:41.284 bw ( KiB/s): min=10936, max=13138, per=18.65%, avg=12037.00, stdev=1557.05, samples=2 00:11:41.284 iops : min= 2734, max= 3284, avg=3009.00, stdev=388.91, samples=2 00:11:41.284 lat (msec) : 4=0.05%, 10=3.69%, 20=51.45%, 50=43.42%, 100=1.39% 00:11:41.284 cpu : usr=2.48%, sys=12.20%, ctx=406, majf=0, minf=11 00:11:41.284 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:11:41.284 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:41.284 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:41.284 issued rwts: total=2621,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:41.284 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:41.284 00:11:41.284 Run status group 0 (all jobs): 00:11:41.284 READ: bw=58.8MiB/s (61.6MB/s), 9244KiB/s-29.9MiB/s (9465kB/s-31.3MB/s), io=59.4MiB (62.2MB), run=1004-1010msec 00:11:41.284 WRITE: bw=63.0MiB/s (66.1MB/s), 9.90MiB/s-30.6MiB/s (10.4MB/s-32.1MB/s), io=63.6MiB (66.7MB), run=1004-1010msec 00:11:41.284 00:11:41.284 Disk stats (read/write): 00:11:41.284 nvme0n1: ios=2098/2063, merge=0/0, ticks=17496/17961, in_queue=35457, util=88.58% 00:11:41.284 nvme0n2: ios=6619/6656, merge=0/0, ticks=24964/18966, in_queue=43930, util=88.97% 00:11:41.284 nvme0n3: ios=2107/2560, merge=0/0, ticks=21881/22091, in_queue=43972, util=88.69% 00:11:41.284 nvme0n4: ios=2505/2560, merge=0/0, ticks=22382/36243, in_queue=58625, util=89.98% 00:11:41.284 18:31:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:41.284 18:31:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=77161 00:11:41.284 18:31:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:41.284 18:31:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:41.284 [global] 00:11:41.284 thread=1 00:11:41.284 invalidate=1 00:11:41.284 rw=read 00:11:41.284 time_based=1 00:11:41.284 runtime=10 00:11:41.284 ioengine=libaio 00:11:41.284 direct=1 00:11:41.284 bs=4096 00:11:41.284 iodepth=1 00:11:41.284 norandommap=1 00:11:41.284 numjobs=1 00:11:41.284 00:11:41.284 [job0] 00:11:41.284 filename=/dev/nvme0n1 00:11:41.284 [job1] 00:11:41.284 filename=/dev/nvme0n2 00:11:41.284 [job2] 00:11:41.284 filename=/dev/nvme0n3 00:11:41.284 [job3] 00:11:41.284 filename=/dev/nvme0n4 00:11:41.284 Could not set queue depth (nvme0n1) 00:11:41.284 Could not set queue depth (nvme0n2) 00:11:41.284 Could not set queue 
depth (nvme0n3) 00:11:41.284 Could not set queue depth (nvme0n4) 00:11:41.284 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:41.284 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:41.284 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:41.284 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:41.284 fio-3.35 00:11:41.284 Starting 4 threads 00:11:44.574 18:31:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:44.574 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=65642496, buflen=4096 00:11:44.574 fio: pid=77209, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:44.574 18:31:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:44.574 fio: pid=77208, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:44.574 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=81707008, buflen=4096 00:11:44.574 18:31:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:44.574 18:31:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:44.833 fio: pid=77206, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:44.833 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=22020096, buflen=4096 00:11:44.833 18:31:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:44.833 18:31:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:44.833 fio: pid=77207, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:44.833 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=17219584, buflen=4096 00:11:45.092 18:31:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:45.092 18:31:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:45.092 00:11:45.092 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=77206: Mon Jul 15 18:31:07 2024 00:11:45.092 read: IOPS=6604, BW=25.8MiB/s (27.0MB/s)(85.0MiB/3295msec) 00:11:45.092 slat (usec): min=7, max=18845, avg=11.19, stdev=186.71 00:11:45.092 clat (usec): min=45, max=3691, avg=139.58, stdev=33.09 00:11:45.092 lat (usec): min=99, max=19168, avg=150.76, stdev=190.99 00:11:45.092 clat percentiles (usec): 00:11:45.092 | 1.00th=[ 114], 5.00th=[ 123], 10.00th=[ 126], 20.00th=[ 129], 00:11:45.092 | 30.00th=[ 133], 40.00th=[ 135], 50.00th=[ 137], 60.00th=[ 139], 00:11:45.092 | 70.00th=[ 143], 80.00th=[ 147], 90.00th=[ 157], 95.00th=[ 172], 00:11:45.092 | 99.00th=[ 196], 99.50th=[ 227], 99.90th=[ 367], 99.95th=[ 396], 00:11:45.092 | 99.99th=[ 791] 00:11:45.092 bw ( KiB/s): min=23208, max=28192, per=29.58%, avg=26328.00, stdev=1802.61, samples=6 00:11:45.092 iops : min= 5802, max= 7048, avg=6582.00, stdev=450.65, samples=6 00:11:45.092 lat (usec) : 50=0.01%, 
100=0.07%, 250=99.55%, 500=0.33%, 750=0.02% 00:11:45.092 lat (usec) : 1000=0.01% 00:11:45.092 lat (msec) : 2=0.01%, 4=0.01% 00:11:45.092 cpu : usr=1.18%, sys=5.19%, ctx=21769, majf=0, minf=1 00:11:45.092 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:45.092 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:45.092 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:45.092 issued rwts: total=21761,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:45.092 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:45.092 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=77207: Mon Jul 15 18:31:07 2024 00:11:45.092 read: IOPS=5849, BW=22.8MiB/s (24.0MB/s)(80.4MiB/3520msec) 00:11:45.092 slat (usec): min=5, max=16485, avg=11.47, stdev=158.78 00:11:45.092 clat (usec): min=56, max=24353, avg=158.78, stdev=174.41 00:11:45.092 lat (usec): min=102, max=24363, avg=170.24, stdev=236.29 00:11:45.092 clat percentiles (usec): 00:11:45.092 | 1.00th=[ 106], 5.00th=[ 119], 10.00th=[ 130], 20.00th=[ 135], 00:11:45.092 | 30.00th=[ 139], 40.00th=[ 143], 50.00th=[ 145], 60.00th=[ 151], 00:11:45.092 | 70.00th=[ 157], 80.00th=[ 178], 90.00th=[ 219], 95.00th=[ 229], 00:11:45.092 | 99.00th=[ 245], 99.50th=[ 253], 99.90th=[ 461], 99.95th=[ 635], 00:11:45.092 | 99.99th=[ 1827] 00:11:45.092 bw ( KiB/s): min=17552, max=26264, per=26.38%, avg=23478.83, stdev=3823.71, samples=6 00:11:45.092 iops : min= 4388, max= 6566, avg=5869.67, stdev=955.98, samples=6 00:11:45.092 lat (usec) : 100=0.13%, 250=99.23%, 500=0.56%, 750=0.04%, 1000=0.01% 00:11:45.092 lat (msec) : 2=0.02%, 4=0.01%, 50=0.01% 00:11:45.092 cpu : usr=1.22%, sys=4.69%, ctx=20600, majf=0, minf=1 00:11:45.092 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:45.092 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:45.092 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:45.092 issued rwts: total=20589,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:45.092 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:45.092 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=77208: Mon Jul 15 18:31:07 2024 00:11:45.092 read: IOPS=6418, BW=25.1MiB/s (26.3MB/s)(77.9MiB/3108msec) 00:11:45.092 slat (usec): min=7, max=7839, avg= 9.67, stdev=76.33 00:11:45.092 clat (usec): min=108, max=1887, avg=145.42, stdev=31.97 00:11:45.092 lat (usec): min=116, max=7985, avg=155.10, stdev=82.89 00:11:45.092 clat percentiles (usec): 00:11:45.092 | 1.00th=[ 127], 5.00th=[ 131], 10.00th=[ 133], 20.00th=[ 137], 00:11:45.092 | 30.00th=[ 139], 40.00th=[ 141], 50.00th=[ 143], 60.00th=[ 145], 00:11:45.092 | 70.00th=[ 149], 80.00th=[ 151], 90.00th=[ 157], 95.00th=[ 163], 00:11:45.092 | 99.00th=[ 194], 99.50th=[ 253], 99.90th=[ 408], 99.95th=[ 701], 00:11:45.092 | 99.99th=[ 1876] 00:11:45.092 bw ( KiB/s): min=25680, max=26376, per=29.15%, avg=25945.60, stdev=260.36, samples=5 00:11:45.092 iops : min= 6420, max= 6594, avg=6486.40, stdev=65.09, samples=5 00:11:45.092 lat (usec) : 250=99.46%, 500=0.46%, 750=0.03%, 1000=0.02% 00:11:45.092 lat (msec) : 2=0.03% 00:11:45.092 cpu : usr=0.84%, sys=4.99%, ctx=19954, majf=0, minf=1 00:11:45.092 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:45.092 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:45.092 complete : 0=0.1%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:45.092 issued rwts: total=19949,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:45.092 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:45.092 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=77209: Mon Jul 15 18:31:07 2024 00:11:45.092 read: IOPS=5685, BW=22.2MiB/s (23.3MB/s)(62.6MiB/2819msec) 00:11:45.092 slat (usec): min=5, max=101, avg= 8.45, stdev= 2.53 00:11:45.092 clat (usec): min=112, max=7584, avg=166.73, stdev=131.89 00:11:45.092 lat (usec): min=120, max=7593, avg=175.18, stdev=131.86 00:11:45.092 clat percentiles (usec): 00:11:45.092 | 1.00th=[ 130], 5.00th=[ 135], 10.00th=[ 137], 20.00th=[ 141], 00:11:45.092 | 30.00th=[ 143], 40.00th=[ 147], 50.00th=[ 149], 60.00th=[ 153], 00:11:45.092 | 70.00th=[ 161], 80.00th=[ 204], 90.00th=[ 223], 95.00th=[ 231], 00:11:45.092 | 99.00th=[ 247], 99.50th=[ 253], 99.90th=[ 627], 99.95th=[ 2900], 00:11:45.092 | 99.99th=[ 7373] 00:11:45.092 bw ( KiB/s): min=17336, max=25776, per=26.34%, avg=23441.60, stdev=3465.08, samples=5 00:11:45.092 iops : min= 4334, max= 6444, avg=5860.40, stdev=866.27, samples=5 00:11:45.092 lat (usec) : 250=99.31%, 500=0.57%, 750=0.02%, 1000=0.01% 00:11:45.092 lat (msec) : 2=0.01%, 4=0.04%, 10=0.03% 00:11:45.092 cpu : usr=0.92%, sys=4.36%, ctx=16028, majf=0, minf=2 00:11:45.092 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:45.092 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:45.092 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:45.092 issued rwts: total=16027,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:45.092 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:45.092 00:11:45.092 Run status group 0 (all jobs): 00:11:45.092 READ: bw=86.9MiB/s (91.1MB/s), 22.2MiB/s-25.8MiB/s (23.3MB/s-27.0MB/s), io=306MiB (321MB), run=2819-3520msec 00:11:45.092 00:11:45.092 Disk stats (read/write): 00:11:45.092 nvme0n1: ios=20519/0, merge=0/0, ticks=2914/0, in_queue=2914, util=94.80% 00:11:45.092 nvme0n2: ios=19432/0, merge=0/0, ticks=3124/0, in_queue=3124, util=95.39% 00:11:45.092 nvme0n3: ios=18562/0, merge=0/0, ticks=2748/0, in_queue=2748, util=96.47% 00:11:45.092 nvme0n4: ios=15177/0, merge=0/0, ticks=2467/0, in_queue=2467, util=96.03% 00:11:45.092 18:31:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:45.092 18:31:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:45.350 18:31:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:45.350 18:31:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:45.608 18:31:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:45.608 18:31:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:45.867 18:31:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:45.867 18:31:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:46.125 18:31:08 nvmf_tcp.nvmf_fio_target -- 
target/fio.sh@69 -- # fio_status=0 00:11:46.125 18:31:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 77161 00:11:46.125 18:31:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:46.125 18:31:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:46.125 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:46.125 18:31:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:46.125 18:31:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:11:46.125 18:31:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:46.125 18:31:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:46.384 18:31:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:46.384 18:31:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:46.384 nvmf hotplug test: fio failed as expected 00:11:46.384 18:31:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:11:46.384 18:31:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:46.384 18:31:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:46.384 18:31:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:46.384 18:31:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:46.384 18:31:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:46.384 18:31:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:46.384 18:31:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:46.384 18:31:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:46.384 18:31:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:46.384 18:31:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:11:46.384 18:31:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:46.384 18:31:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:11:46.384 18:31:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:46.384 18:31:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:46.642 rmmod nvme_tcp 00:11:46.642 rmmod nvme_fabrics 00:11:46.642 rmmod nvme_keyring 00:11:46.642 18:31:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:46.642 18:31:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:11:46.642 18:31:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:11:46.642 18:31:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 76676 ']' 00:11:46.642 18:31:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 76676 00:11:46.642 18:31:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 76676 ']' 00:11:46.642 18:31:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 76676 00:11:46.642 18:31:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:11:46.642 18:31:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 
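The teardown traced above boils down to a simple pattern: disconnect the NVMe-oF controller, wait until the SPDKISFASTANDAWESOME namespaces drop out of lsblk, delete the subsystem over RPC, then stop the nvmf target process. A minimal standalone sketch of that pattern follows; it is illustrative only, not the actual autotest_common.sh helpers, and the $nvmfpid variable is an assumption standing in for the recorded target pid (76676 in this run).
  # Sketch of the disconnect-and-kill teardown seen in the trace above (assumed names).
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  for i in $(seq 1 15); do
      # wait for the namespaces with the test serial to disappear
      lsblk -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME || break
      sleep 1
  done
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  # stop the nvmf target if it is still alive
  kill -0 "$nvmfpid" 2>/dev/null && kill "$nvmfpid"
  wait "$nvmfpid" 2>/dev/null || true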
00:11:46.642 18:31:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76676 00:11:46.642 killing process with pid 76676 00:11:46.642 18:31:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:46.642 18:31:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:46.642 18:31:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76676' 00:11:46.642 18:31:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 76676 00:11:46.642 18:31:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 76676 00:11:46.902 18:31:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:46.902 18:31:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:46.902 18:31:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:46.902 18:31:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:46.902 18:31:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:46.902 18:31:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:46.902 18:31:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:46.902 18:31:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:46.902 18:31:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:46.902 00:11:46.902 real 0m18.734s 00:11:46.902 user 1m10.337s 00:11:46.902 sys 0m9.523s 00:11:46.902 18:31:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:46.902 18:31:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.902 ************************************ 00:11:46.902 END TEST nvmf_fio_target 00:11:46.902 ************************************ 00:11:46.902 18:31:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:46.902 18:31:09 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:46.902 18:31:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:46.902 18:31:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:46.902 18:31:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:46.902 ************************************ 00:11:46.902 START TEST nvmf_bdevio 00:11:46.902 ************************************ 00:11:46.902 18:31:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:47.161 * Looking for test storage... 
00:11:47.161 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:47.161 18:31:09 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:47.161 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:47.161 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:47.161 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:47.161 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:47.161 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:47.161 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:47.161 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:47.161 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:47.161 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:47.161 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:47.161 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:47.161 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:11:47.161 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=ee8aff67-4252-4979-91cf-1a72f40d57b6 00:11:47.161 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:47.161 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:47.161 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:47.161 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:47.161 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:47.161 18:31:09 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:47.161 18:31:09 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:47.161 18:31:09 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:47.161 18:31:09 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.161 18:31:09 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.161 18:31:09 
nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.161 18:31:09 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:47.162 18:31:09 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.162 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:11:47.162 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:47.162 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:47.162 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:47.162 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:47.162 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:47.162 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:47.162 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:47.162 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:47.162 18:31:09 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:47.162 18:31:09 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:47.162 18:31:09 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:11:47.162 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:47.162 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:47.162 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:47.162 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:47.162 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:47.162 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:47.162 18:31:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:47.162 18:31:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:47.162 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:47.162 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:47.162 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:47.162 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:47.162 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@431 -- 
# [[ tcp == tcp ]] 00:11:47.162 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:47.162 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:47.162 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:47.162 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:47.162 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:47.162 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:47.162 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:47.162 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:47.162 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:47.162 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:47.162 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:47.162 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:47.162 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:47.162 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:47.162 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:47.162 Cannot find device "nvmf_tgt_br" 00:11:47.162 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # true 00:11:47.162 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:47.162 Cannot find device "nvmf_tgt_br2" 00:11:47.162 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # true 00:11:47.162 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:47.162 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:47.162 Cannot find device "nvmf_tgt_br" 00:11:47.162 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # true 00:11:47.162 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:47.162 Cannot find device "nvmf_tgt_br2" 00:11:47.162 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # true 00:11:47.162 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:47.162 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:47.162 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:47.162 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:47.162 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:11:47.162 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:47.162 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:47.162 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:11:47.162 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:47.162 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:47.162 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link add 
nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:47.421 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:47.421 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:47.421 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:47.421 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:47.421 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:47.421 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:47.421 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:47.421 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:47.421 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:47.421 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:47.421 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:47.421 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:47.421 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:47.421 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:47.421 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:47.422 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:47.422 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:47.422 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:47.422 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:47.422 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:47.422 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:47.422 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:47.422 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.109 ms 00:11:47.422 00:11:47.422 --- 10.0.0.2 ping statistics --- 00:11:47.422 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:47.422 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:11:47.422 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:47.422 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:47.422 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.085 ms 00:11:47.422 00:11:47.422 --- 10.0.0.3 ping statistics --- 00:11:47.422 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:47.422 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:11:47.422 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:47.422 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:47.422 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 00:11:47.422 00:11:47.422 --- 10.0.0.1 ping statistics --- 00:11:47.422 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:47.422 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:11:47.422 18:31:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:47.422 18:31:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@433 -- # return 0 00:11:47.422 18:31:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:47.422 18:31:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:47.422 18:31:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:47.422 18:31:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:47.422 18:31:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:47.422 18:31:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:47.422 18:31:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:47.422 18:31:10 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:47.422 18:31:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:47.422 18:31:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:47.422 18:31:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:47.681 18:31:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=77526 00:11:47.681 18:31:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:47.681 18:31:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 77526 00:11:47.681 18:31:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 77526 ']' 00:11:47.681 18:31:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:47.681 18:31:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:47.681 18:31:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:47.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:47.681 18:31:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:47.681 18:31:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:47.681 [2024-07-15 18:31:10.093197] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:11:47.681 [2024-07-15 18:31:10.093274] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:47.681 [2024-07-15 18:31:10.237413] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:47.939 [2024-07-15 18:31:10.332754] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:47.940 [2024-07-15 18:31:10.332795] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
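The nvmf_veth_init trace above wires up a disposable test topology: the target interfaces live inside the nvmf_tgt_ns_spdk namespace, the initiator interface stays in the root namespace, and the veth peer ends are enslaved to one bridge so the 10.0.0.0/24 addresses can reach each other. The lines below are a condensed editor's sketch of that wiring (root privileges assumed, one target interface shown instead of the test's two), not output from this run.

# Editor's sketch of the veth/bridge layout built by nvmf_veth_init (requires root).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator end stays in the root netns
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # target end of this pair...
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # ...moves into the target namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link set nvmf_init_br master nvmf_br                        # bridge the peer ends together
ip link set nvmf_tgt_br  master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                             # initiator -> target sanity check, as in the trace

Keeping the target side in its own namespace is what lets nvmftestfini tear the whole topology down (or flush nvmf_init_if) without touching the host's real interfaces.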
00:11:47.940 [2024-07-15 18:31:10.332804] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:47.940 [2024-07-15 18:31:10.332812] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:47.940 [2024-07-15 18:31:10.332819] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:47.940 [2024-07-15 18:31:10.332923] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:11:47.940 [2024-07-15 18:31:10.333201] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:11:47.940 [2024-07-15 18:31:10.333449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:11:47.940 [2024-07-15 18:31:10.333455] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:48.507 18:31:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:48.507 18:31:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:11:48.507 18:31:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:48.507 18:31:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:48.507 18:31:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:48.507 18:31:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:48.507 18:31:10 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:48.507 18:31:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.507 18:31:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:48.507 [2024-07-15 18:31:11.010525] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:48.507 18:31:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.507 18:31:11 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:48.507 18:31:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.507 18:31:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:48.507 Malloc0 00:11:48.507 18:31:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.507 18:31:11 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:48.507 18:31:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.507 18:31:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:48.507 18:31:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.507 18:31:11 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:48.507 18:31:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.507 18:31:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:48.507 18:31:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.507 18:31:11 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:48.507 18:31:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.507 18:31:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
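For readers reconstructing the target side of this test: everything bdevio talks to is created through SPDK's JSON-RPC interface, and rpc_cmd in the trace is a thin wrapper around scripts/rpc.py. The following is a minimal standalone sketch of the same sequence, assuming an nvmf_tgt process is already up inside the namespace and listening on the default RPC socket (/var/tmp/spdk.sock).

# Editor's sketch of the target configuration traced above (not part of the test run).
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192                                    # TCP transport with the options the test passes
$RPC bdev_malloc_create 64 512 -b Malloc0                                       # 64 MiB RAM-backed bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001  # -a: allow any host, -s: serial number
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                   # expose the bdev as a namespace
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420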
00:11:48.507 [2024-07-15 18:31:11.070614] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:48.507 18:31:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.507 18:31:11 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:48.507 18:31:11 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:48.507 18:31:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:11:48.507 18:31:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:11:48.507 18:31:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:48.507 18:31:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:48.507 { 00:11:48.507 "params": { 00:11:48.507 "name": "Nvme$subsystem", 00:11:48.507 "trtype": "$TEST_TRANSPORT", 00:11:48.507 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:48.507 "adrfam": "ipv4", 00:11:48.507 "trsvcid": "$NVMF_PORT", 00:11:48.507 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:48.507 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:48.507 "hdgst": ${hdgst:-false}, 00:11:48.507 "ddgst": ${ddgst:-false} 00:11:48.507 }, 00:11:48.507 "method": "bdev_nvme_attach_controller" 00:11:48.507 } 00:11:48.507 EOF 00:11:48.507 )") 00:11:48.507 18:31:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:11:48.507 18:31:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:11:48.507 18:31:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:11:48.507 18:31:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:48.507 "params": { 00:11:48.507 "name": "Nvme1", 00:11:48.507 "trtype": "tcp", 00:11:48.507 "traddr": "10.0.0.2", 00:11:48.507 "adrfam": "ipv4", 00:11:48.507 "trsvcid": "4420", 00:11:48.507 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:48.507 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:48.507 "hdgst": false, 00:11:48.507 "ddgst": false 00:11:48.507 }, 00:11:48.507 "method": "bdev_nvme_attach_controller" 00:11:48.507 }' 00:11:48.813 [2024-07-15 18:31:11.124822] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 
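The bdevio binary is driven entirely by a JSON config fed to --json on /dev/fd/62; the trace prints only the per-controller entry that gen_nvmf_target_json assembles. The sketch below writes an equivalent config to a regular file instead, with the outer "subsystems" wrapper assumed from SPDK's usual JSON config layout (only the inner bdev_nvme_attach_controller entry appears verbatim in the trace).

# Editor's sketch: run bdevio against the target above from an explicit config file.
cat > /tmp/bdevio_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /tmp/bdevio_nvme.json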
00:11:48.813 [2024-07-15 18:31:11.125044] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77580 ] 00:11:48.813 [2024-07-15 18:31:11.265602] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:48.813 [2024-07-15 18:31:11.367132] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:48.813 [2024-07-15 18:31:11.367325] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:48.813 [2024-07-15 18:31:11.367326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:49.071 I/O targets: 00:11:49.071 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:49.071 00:11:49.071 00:11:49.071 CUnit - A unit testing framework for C - Version 2.1-3 00:11:49.071 http://cunit.sourceforge.net/ 00:11:49.071 00:11:49.071 00:11:49.071 Suite: bdevio tests on: Nvme1n1 00:11:49.071 Test: blockdev write read block ...passed 00:11:49.071 Test: blockdev write zeroes read block ...passed 00:11:49.071 Test: blockdev write zeroes read no split ...passed 00:11:49.071 Test: blockdev write zeroes read split ...passed 00:11:49.071 Test: blockdev write zeroes read split partial ...passed 00:11:49.071 Test: blockdev reset ...[2024-07-15 18:31:11.633140] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:11:49.071 [2024-07-15 18:31:11.633376] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa06180 (9): Bad file descriptor 00:11:49.071 [2024-07-15 18:31:11.651873] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:11:49.071 passed 00:11:49.071 Test: blockdev write read 8 blocks ...passed 00:11:49.071 Test: blockdev write read size > 128k ...passed 00:11:49.071 Test: blockdev write read invalid size ...passed 00:11:49.330 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:49.330 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:49.330 Test: blockdev write read max offset ...passed 00:11:49.330 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:49.330 Test: blockdev writev readv 8 blocks ...passed 00:11:49.330 Test: blockdev writev readv 30 x 1block ...passed 00:11:49.330 Test: blockdev writev readv block ...passed 00:11:49.330 Test: blockdev writev readv size > 128k ...passed 00:11:49.330 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:49.330 Test: blockdev comparev and writev ...[2024-07-15 18:31:11.823178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:49.330 [2024-07-15 18:31:11.823351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:49.330 [2024-07-15 18:31:11.823390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:49.330 [2024-07-15 18:31:11.823400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:49.330 [2024-07-15 18:31:11.823663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:49.330 [2024-07-15 18:31:11.823679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:49.331 [2024-07-15 18:31:11.823693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:49.331 [2024-07-15 18:31:11.823703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:49.331 [2024-07-15 18:31:11.823966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:49.331 [2024-07-15 18:31:11.823981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:49.331 [2024-07-15 18:31:11.823995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:49.331 [2024-07-15 18:31:11.824004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:49.331 [2024-07-15 18:31:11.824227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:49.331 [2024-07-15 18:31:11.824241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:49.331 [2024-07-15 18:31:11.824255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:49.331 [2024-07-15 18:31:11.824263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:49.331 passed 00:11:49.331 Test: blockdev nvme passthru rw ...passed 00:11:49.331 Test: blockdev nvme passthru vendor specific ...[2024-07-15 18:31:11.905920] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:49.331 [2024-07-15 18:31:11.905960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:49.331 [2024-07-15 18:31:11.906054] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:49.331 [2024-07-15 18:31:11.906073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:49.331 [2024-07-15 18:31:11.906157] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:49.331 [2024-07-15 18:31:11.906172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:49.331 [2024-07-15 18:31:11.906266] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:49.331 [2024-07-15 18:31:11.906280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:49.331 passed 00:11:49.331 Test: blockdev nvme admin passthru ...passed 00:11:49.589 Test: blockdev copy ...passed 00:11:49.589 00:11:49.589 Run Summary: Type Total Ran Passed Failed Inactive 00:11:49.589 suites 1 1 n/a 0 0 00:11:49.589 tests 23 23 23 0 0 00:11:49.589 asserts 152 152 152 0 n/a 00:11:49.589 00:11:49.589 Elapsed time = 0.886 seconds 00:11:49.589 18:31:12 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:49.589 18:31:12 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.589 18:31:12 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:49.589 18:31:12 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.589 18:31:12 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:49.589 18:31:12 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:49.589 18:31:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:49.589 18:31:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:11:49.589 18:31:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:49.849 18:31:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:11:49.849 18:31:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:49.849 18:31:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:49.849 rmmod nvme_tcp 00:11:49.849 rmmod nvme_fabrics 00:11:49.849 rmmod nvme_keyring 00:11:49.849 18:31:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:49.849 18:31:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:11:49.849 18:31:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:11:49.849 18:31:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 77526 ']' 00:11:49.849 18:31:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 77526 00:11:49.849 18:31:12 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
77526 ']' 00:11:49.849 18:31:12 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 77526 00:11:49.849 18:31:12 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:11:49.849 18:31:12 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:49.849 18:31:12 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77526 00:11:49.849 18:31:12 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:11:49.849 18:31:12 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:11:49.849 18:31:12 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77526' 00:11:49.849 killing process with pid 77526 00:11:49.849 18:31:12 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 77526 00:11:49.849 18:31:12 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 77526 00:11:50.107 18:31:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:50.107 18:31:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:50.107 18:31:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:50.107 18:31:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:50.108 18:31:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:50.108 18:31:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:50.108 18:31:12 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:50.108 18:31:12 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:50.108 18:31:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:50.108 00:11:50.108 real 0m3.155s 00:11:50.108 user 0m10.542s 00:11:50.108 sys 0m0.932s 00:11:50.108 18:31:12 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:50.108 18:31:12 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:50.108 ************************************ 00:11:50.108 END TEST nvmf_bdevio 00:11:50.108 ************************************ 00:11:50.108 18:31:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:50.108 18:31:12 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:11:50.108 18:31:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:50.108 18:31:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:50.108 18:31:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:50.108 ************************************ 00:11:50.108 START TEST nvmf_auth_target 00:11:50.108 ************************************ 00:11:50.108 18:31:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:11:50.367 * Looking for test storage... 
00:11:50.367 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:50.367 18:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:50.367 18:31:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:11:50.367 18:31:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:50.367 18:31:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:50.367 18:31:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:50.367 18:31:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:50.367 18:31:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:50.367 18:31:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:50.367 18:31:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:50.367 18:31:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:50.367 18:31:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:50.367 18:31:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:50.367 18:31:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:11:50.367 18:31:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=ee8aff67-4252-4979-91cf-1a72f40d57b6 00:11:50.367 18:31:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:50.367 18:31:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:50.367 18:31:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:50.367 18:31:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:50.367 18:31:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:50.367 18:31:12 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:50.367 18:31:12 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:50.367 18:31:12 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:50.367 18:31:12 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.367 18:31:12 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.367 18:31:12 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.367 18:31:12 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:11:50.367 18:31:12 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.367 18:31:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:11:50.367 18:31:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:50.367 18:31:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:50.367 18:31:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:50.367 18:31:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:50.367 18:31:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:50.367 18:31:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:50.367 18:31:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:50.367 18:31:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:50.367 18:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:11:50.367 18:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:11:50.368 18:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:11:50.368 18:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:11:50.368 18:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:11:50.368 18:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:11:50.368 18:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:11:50.368 18:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:11:50.368 18:31:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:50.368 18:31:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:50.368 18:31:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:50.368 18:31:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:50.368 18:31:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:50.368 18:31:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:50.368 18:31:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:50.368 18:31:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:50.368 18:31:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:50.368 18:31:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:50.368 18:31:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:50.368 18:31:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:50.368 18:31:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:50.368 18:31:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:50.368 18:31:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:50.368 18:31:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:50.368 18:31:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:50.368 18:31:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:50.368 18:31:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:50.368 18:31:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:50.368 18:31:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:50.368 18:31:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:50.368 18:31:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:50.368 18:31:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:50.368 18:31:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:50.368 18:31:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:50.368 18:31:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:50.368 18:31:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:50.368 Cannot find device "nvmf_tgt_br" 00:11:50.368 18:31:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # true 00:11:50.368 18:31:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:50.368 Cannot find device "nvmf_tgt_br2" 00:11:50.368 18:31:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # true 00:11:50.368 18:31:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:50.368 18:31:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:50.368 Cannot find device "nvmf_tgt_br" 00:11:50.368 
18:31:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # true 00:11:50.368 18:31:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:50.368 Cannot find device "nvmf_tgt_br2" 00:11:50.368 18:31:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # true 00:11:50.368 18:31:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:50.368 18:31:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:50.368 18:31:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:50.368 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:50.368 18:31:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:11:50.368 18:31:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:50.627 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:50.627 18:31:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:11:50.627 18:31:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:50.627 18:31:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:50.627 18:31:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:50.627 18:31:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:50.627 18:31:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:50.627 18:31:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:50.627 18:31:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:50.627 18:31:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:50.627 18:31:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:50.627 18:31:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:50.627 18:31:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:50.627 18:31:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:50.627 18:31:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:50.627 18:31:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:50.627 18:31:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:50.627 18:31:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:50.627 18:31:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:50.627 18:31:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:50.627 18:31:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:50.627 18:31:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:50.627 18:31:13 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:50.627 18:31:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:50.627 18:31:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:50.627 18:31:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:50.627 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:50.627 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:11:50.627 00:11:50.627 --- 10.0.0.2 ping statistics --- 00:11:50.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:50.627 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:11:50.627 18:31:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:50.627 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:50.627 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:11:50.627 00:11:50.627 --- 10.0.0.3 ping statistics --- 00:11:50.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:50.627 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:11:50.627 18:31:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:50.627 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:50.627 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:11:50.627 00:11:50.627 --- 10.0.0.1 ping statistics --- 00:11:50.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:50.627 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:11:50.627 18:31:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:50.627 18:31:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@433 -- # return 0 00:11:50.627 18:31:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:50.627 18:31:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:50.627 18:31:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:50.627 18:31:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:50.627 18:31:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:50.627 18:31:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:50.627 18:31:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:50.884 18:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:11:50.884 18:31:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:50.884 18:31:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:50.884 18:31:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.884 18:31:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=77762 00:11:50.884 18:31:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:11:50.884 18:31:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 77762 00:11:50.884 18:31:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 77762 ']' 00:11:50.884 18:31:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:50.884 18:31:13 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:50.884 18:31:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:50.884 18:31:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:50.884 18:31:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.821 18:31:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:51.821 18:31:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:11:51.821 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:51.821 18:31:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:51.821 18:31:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.821 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:51.821 18:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=77810 00:11:51.821 18:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:11:51.821 18:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:11:51.821 18:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:11:51.821 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:11:51.821 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:51.821 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:11:51.821 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:11:51.821 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:11:51.821 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:11:51.821 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=97e6029ae9e9b60652271cc3995a2351df870c5bb5f2deec 00:11:51.821 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:11:51.821 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.pSQ 00:11:51.821 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 97e6029ae9e9b60652271cc3995a2351df870c5bb5f2deec 0 00:11:51.821 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 97e6029ae9e9b60652271cc3995a2351df870c5bb5f2deec 0 00:11:51.821 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:11:51.821 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:11:51.821 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=97e6029ae9e9b60652271cc3995a2351df870c5bb5f2deec 00:11:51.821 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:11:51.821 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:11:51.821 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.pSQ 00:11:51.821 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.pSQ 00:11:51.821 18:31:14 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.pSQ 00:11:51.821 18:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:11:51.821 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:11:51.821 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:51.821 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:11:51.821 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:11:51.821 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:11:51.821 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:11:51.821 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=bf7c641b55b553e3ed6f709bd039515ac04e658cf1a1876ca9fbc86f173f7fd6 00:11:51.821 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:11:51.821 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.GcK 00:11:51.821 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key bf7c641b55b553e3ed6f709bd039515ac04e658cf1a1876ca9fbc86f173f7fd6 3 00:11:51.821 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 bf7c641b55b553e3ed6f709bd039515ac04e658cf1a1876ca9fbc86f173f7fd6 3 00:11:51.821 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:11:51.821 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:11:51.821 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=bf7c641b55b553e3ed6f709bd039515ac04e658cf1a1876ca9fbc86f173f7fd6 00:11:51.821 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:11:51.821 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:11:51.821 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.GcK 00:11:51.821 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.GcK 00:11:51.821 18:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.GcK 00:11:51.821 18:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:11:51.821 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:11:51.821 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:51.821 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:11:51.821 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:11:51.821 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:11:51.821 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:11:51.821 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=f9d4b01479729b71c485db7820c32bf6 00:11:51.821 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:11:51.821 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.1Uk 00:11:51.821 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key f9d4b01479729b71c485db7820c32bf6 1 00:11:51.821 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 f9d4b01479729b71c485db7820c32bf6 1 
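Each gen_dhchap_key call in this stretch of the trace follows the same recipe: pull len/2 random bytes as a plain hex string with xxd, wrap them as a DHHC-1 secret tagged with the chosen digest index (0 = null, 1 = sha256, 2 = sha384, 3 = sha512), and stash the result in a mode-0600 temp file. The sketch below reproduces only the steps visible above; the DHHC-1 wrapping itself is done by an inline Python helper whose body the trace never prints, so it is left as a placeholder here.

# Editor's sketch of gen_dhchap_key as traced above (DHHC-1 wrapping omitted, see note).
gen_dhchap_key() {
    local digest=$1 len=$2
    # Two hex characters per byte, so a 48-character key needs 24 random bytes.
    local key; key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
    local file; file=$(mktemp -t "spdk.key-${digest}.XXX")
    # The real helper pipes $key through a small Python one-liner that emits the
    # "DHHC-1:<digest>:..." wrapped form; writing the raw hex is a stand-in only.
    echo "$key" > "$file"
    chmod 0600 "$file"
    echo "$file"
}
gen_dhchap_key null 48    # e.g. /tmp/spdk.key-null.XXX, as keys[0] above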
00:11:51.821 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:11:51.821 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:11:51.821 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=f9d4b01479729b71c485db7820c32bf6 00:11:51.821 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:11:51.821 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:11:51.821 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.1Uk 00:11:51.821 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.1Uk 00:11:51.821 18:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.1Uk 00:11:51.821 18:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:11:51.821 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:11:51.821 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:51.821 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:11:51.821 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:11:51.821 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:11:51.821 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:11:52.081 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=da23fe9876cb094d7a4ea21703d99d2c14da5b57f955b992 00:11:52.081 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:11:52.081 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.eoz 00:11:52.081 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key da23fe9876cb094d7a4ea21703d99d2c14da5b57f955b992 2 00:11:52.081 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 da23fe9876cb094d7a4ea21703d99d2c14da5b57f955b992 2 00:11:52.081 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:11:52.081 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:11:52.081 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=da23fe9876cb094d7a4ea21703d99d2c14da5b57f955b992 00:11:52.081 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:11:52.081 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:11:52.081 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.eoz 00:11:52.081 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.eoz 00:11:52.081 18:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.eoz 00:11:52.081 18:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:11:52.081 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:11:52.081 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:52.081 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:11:52.081 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:11:52.081 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:11:52.081 
18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:11:52.081 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=68512cde9643e7d9e091dc6d073e8128a3214ae44754f397 00:11:52.081 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:11:52.081 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Sfg 00:11:52.081 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 68512cde9643e7d9e091dc6d073e8128a3214ae44754f397 2 00:11:52.081 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 68512cde9643e7d9e091dc6d073e8128a3214ae44754f397 2 00:11:52.081 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:11:52.081 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:11:52.081 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=68512cde9643e7d9e091dc6d073e8128a3214ae44754f397 00:11:52.081 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:11:52.081 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:11:52.081 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Sfg 00:11:52.081 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Sfg 00:11:52.081 18:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.Sfg 00:11:52.081 18:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:11:52.081 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:11:52.081 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:52.081 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:11:52.081 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:11:52.081 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:11:52.081 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:11:52.081 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=418187693f1d2179b3ff7e4a280a68c1 00:11:52.081 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:11:52.081 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.dZa 00:11:52.081 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 418187693f1d2179b3ff7e4a280a68c1 1 00:11:52.081 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 418187693f1d2179b3ff7e4a280a68c1 1 00:11:52.081 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:11:52.081 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:11:52.081 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=418187693f1d2179b3ff7e4a280a68c1 00:11:52.081 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:11:52.081 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:11:52.081 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.dZa 00:11:52.081 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.dZa 00:11:52.081 18:31:14 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.dZa 00:11:52.081 18:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:11:52.081 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:11:52.081 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:52.081 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:11:52.081 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:11:52.081 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:11:52.081 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:11:52.081 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=de17c356dcd2605ce811683cabc4c350e23acc09fb601296fd3296a4f1fa33c6 00:11:52.081 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:11:52.081 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.PGA 00:11:52.081 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key de17c356dcd2605ce811683cabc4c350e23acc09fb601296fd3296a4f1fa33c6 3 00:11:52.081 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 de17c356dcd2605ce811683cabc4c350e23acc09fb601296fd3296a4f1fa33c6 3 00:11:52.081 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:11:52.081 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:11:52.081 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=de17c356dcd2605ce811683cabc4c350e23acc09fb601296fd3296a4f1fa33c6 00:11:52.081 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:11:52.081 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:11:52.340 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.PGA 00:11:52.340 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.PGA 00:11:52.340 18:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.PGA 00:11:52.340 18:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:11:52.340 18:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 77762 00:11:52.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:52.340 18:31:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 77762 ']' 00:11:52.340 18:31:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:52.340 18:31:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:52.340 18:31:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:52.340 18:31:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:52.340 18:31:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
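The xtrace that follows (waitforlisten on the second application, then the loop over keys[] and ckeys[]) registers every generated key file with both SPDK apps: the nvmf target on its default RPC socket and the host-side spdk_tgt started with -r /var/tmp/host.sock. Below is a rough bash reconstruction of that loop; the hostrpc body, the direct rpc.py call in place of the rpc_cmd wrapper, and the ckey guard are inferred from the rpc.py invocations shown in the trace, not copied from auth.sh.

# Sketch only: key files taken from the trace above, loop shape assumed.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
keys=(/tmp/spdk.key-null.pSQ /tmp/spdk.key-sha256.1Uk /tmp/spdk.key-sha384.Sfg /tmp/spdk.key-sha512.PGA)
ckeys=(/tmp/spdk.key-sha512.GcK /tmp/spdk.key-sha384.eoz /tmp/spdk.key-sha256.dZa "")

hostrpc() {
  # The host-side app listens on /var/tmp/host.sock, so its RPCs need -s.
  "$rpc" -s /var/tmp/host.sock "$@"
}

for i in "${!keys[@]}"; do
  "$rpc" keyring_file_add_key "key$i" "${keys[i]}"      # target keyring (default spdk.sock)
  hostrpc keyring_file_add_key "key$i" "${keys[i]}"     # host keyring
  if [[ -n ${ckeys[i]} ]]; then                         # bidirectional (controller) key, if any
    "$rpc" keyring_file_add_key "ckey$i" "${ckeys[i]}"
    hostrpc keyring_file_add_key "ckey$i" "${ckeys[i]}"
  fi
done
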
00:11:52.598 18:31:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:52.598 18:31:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:11:52.598 18:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 77810 /var/tmp/host.sock 00:11:52.598 18:31:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 77810 ']' 00:11:52.598 18:31:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:11:52.598 18:31:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:52.598 18:31:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:11:52.598 18:31:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:52.598 18:31:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.598 18:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:52.598 18:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:11:52.598 18:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:11:52.598 18:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:52.598 18:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.598 18:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:52.598 18:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:11:52.598 18:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.pSQ 00:11:52.598 18:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:52.598 18:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.598 18:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:52.598 18:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.pSQ 00:11:52.598 18:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.pSQ 00:11:52.857 18:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.GcK ]] 00:11:52.857 18:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.GcK 00:11:52.857 18:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:52.857 18:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.857 18:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:52.857 18:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.GcK 00:11:52.857 18:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.GcK 00:11:53.115 18:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:11:53.115 18:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.1Uk 00:11:53.115 18:31:15 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.115 18:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.115 18:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.116 18:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.1Uk 00:11:53.116 18:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.1Uk 00:11:53.374 18:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.eoz ]] 00:11:53.374 18:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.eoz 00:11:53.374 18:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.374 18:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.374 18:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.374 18:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.eoz 00:11:53.374 18:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.eoz 00:11:53.633 18:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:11:53.633 18:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Sfg 00:11:53.633 18:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.633 18:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.633 18:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.633 18:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.Sfg 00:11:53.633 18:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.Sfg 00:11:53.891 18:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.dZa ]] 00:11:53.891 18:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.dZa 00:11:53.891 18:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.891 18:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.891 18:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.891 18:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.dZa 00:11:53.891 18:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.dZa 00:11:54.150 18:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:11:54.150 18:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.PGA 00:11:54.150 18:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:54.150 18:31:16 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:11:54.150 18:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:54.150 18:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.PGA 00:11:54.150 18:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.PGA 00:11:54.150 18:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:11:54.150 18:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:11:54.150 18:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:54.150 18:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:54.150 18:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:54.150 18:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:54.409 18:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:11:54.409 18:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:54.409 18:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:54.409 18:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:54.409 18:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:54.409 18:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:54.409 18:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:54.409 18:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:54.409 18:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.409 18:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:54.409 18:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:54.409 18:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:54.667 00:11:54.667 18:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:54.667 18:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:54.667 18:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:54.925 18:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:54.925 18:31:17 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:54.925 18:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:54.925 18:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.925 18:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:54.925 18:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:54.925 { 00:11:54.925 "auth": { 00:11:54.925 "dhgroup": "null", 00:11:54.925 "digest": "sha256", 00:11:54.925 "state": "completed" 00:11:54.925 }, 00:11:54.926 "cntlid": 1, 00:11:54.926 "listen_address": { 00:11:54.926 "adrfam": "IPv4", 00:11:54.926 "traddr": "10.0.0.2", 00:11:54.926 "trsvcid": "4420", 00:11:54.926 "trtype": "TCP" 00:11:54.926 }, 00:11:54.926 "peer_address": { 00:11:54.926 "adrfam": "IPv4", 00:11:54.926 "traddr": "10.0.0.1", 00:11:54.926 "trsvcid": "51818", 00:11:54.926 "trtype": "TCP" 00:11:54.926 }, 00:11:54.926 "qid": 0, 00:11:54.926 "state": "enabled", 00:11:54.926 "thread": "nvmf_tgt_poll_group_000" 00:11:54.926 } 00:11:54.926 ]' 00:11:54.926 18:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:54.926 18:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:54.926 18:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:55.184 18:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:55.184 18:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:55.184 18:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:55.184 18:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:55.184 18:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:55.184 18:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-secret DHHC-1:00:OTdlNjAyOWFlOWU5YjYwNjUyMjcxY2MzOTk1YTIzNTFkZjg3MGM1YmI1ZjJkZWVjWYQ0Hg==: --dhchap-ctrl-secret DHHC-1:03:YmY3YzY0MWI1NWI1NTNlM2VkNmY3MDliZDAzOTUxNWFjMDRlNjU4Y2YxYTE4NzZjYTlmYmM4NmYxNzNmN2ZkNngQnKU=: 00:11:59.400 18:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:59.400 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:59.400 18:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:11:59.400 18:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:59.400 18:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.400 18:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:59.400 18:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:59.400 18:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:59.400 18:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:59.400 18:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:11:59.400 18:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:59.400 18:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:59.400 18:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:59.400 18:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:59.400 18:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:59.400 18:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:59.400 18:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:59.401 18:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.401 18:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:59.401 18:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:59.401 18:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:59.401 00:11:59.401 18:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:59.401 18:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:59.401 18:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:59.401 18:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:59.401 18:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:59.401 18:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:59.401 18:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.401 18:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:59.401 18:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:59.401 { 00:11:59.401 "auth": { 00:11:59.401 "dhgroup": "null", 00:11:59.401 "digest": "sha256", 00:11:59.401 "state": "completed" 00:11:59.401 }, 00:11:59.401 "cntlid": 3, 00:11:59.401 "listen_address": { 00:11:59.401 "adrfam": "IPv4", 00:11:59.401 "traddr": "10.0.0.2", 00:11:59.401 "trsvcid": "4420", 00:11:59.401 "trtype": "TCP" 00:11:59.401 }, 00:11:59.401 "peer_address": { 00:11:59.401 "adrfam": "IPv4", 00:11:59.401 "traddr": "10.0.0.1", 00:11:59.401 "trsvcid": "51836", 00:11:59.401 "trtype": "TCP" 00:11:59.401 }, 00:11:59.401 "qid": 0, 00:11:59.401 "state": "enabled", 00:11:59.401 "thread": "nvmf_tgt_poll_group_000" 
00:11:59.401 } 00:11:59.401 ]' 00:11:59.401 18:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:59.659 18:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:59.659 18:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:59.659 18:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:59.659 18:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:59.659 18:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:59.659 18:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:59.659 18:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:59.918 18:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-secret DHHC-1:01:ZjlkNGIwMTQ3OTcyOWI3MWM0ODVkYjc4MjBjMzJiZjZeZGuM: --dhchap-ctrl-secret DHHC-1:02:ZGEyM2ZlOTg3NmNiMDk0ZDdhNGVhMjE3MDNkOTlkMmMxNGRhNWI1N2Y5NTViOTky87+8Nw==: 00:12:00.509 18:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:00.509 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:00.509 18:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:12:00.509 18:31:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:00.509 18:31:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.509 18:31:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:00.509 18:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:00.509 18:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:00.509 18:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:00.770 18:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:12:00.770 18:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:00.770 18:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:00.770 18:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:00.770 18:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:00.770 18:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:00.770 18:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:00.770 18:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:00.770 18:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:12:00.770 18:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:00.770 18:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:00.770 18:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:01.033 00:12:01.033 18:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:01.033 18:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:01.033 18:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:01.033 18:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:01.033 18:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:01.033 18:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.033 18:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.291 18:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.291 18:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:01.291 { 00:12:01.291 "auth": { 00:12:01.291 "dhgroup": "null", 00:12:01.291 "digest": "sha256", 00:12:01.291 "state": "completed" 00:12:01.291 }, 00:12:01.291 "cntlid": 5, 00:12:01.291 "listen_address": { 00:12:01.291 "adrfam": "IPv4", 00:12:01.291 "traddr": "10.0.0.2", 00:12:01.291 "trsvcid": "4420", 00:12:01.291 "trtype": "TCP" 00:12:01.291 }, 00:12:01.291 "peer_address": { 00:12:01.291 "adrfam": "IPv4", 00:12:01.291 "traddr": "10.0.0.1", 00:12:01.291 "trsvcid": "51856", 00:12:01.291 "trtype": "TCP" 00:12:01.291 }, 00:12:01.291 "qid": 0, 00:12:01.291 "state": "enabled", 00:12:01.291 "thread": "nvmf_tgt_poll_group_000" 00:12:01.291 } 00:12:01.291 ]' 00:12:01.291 18:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:01.291 18:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:01.291 18:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:01.291 18:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:01.291 18:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:01.291 18:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:01.291 18:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:01.291 18:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:01.557 18:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid 
ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-secret DHHC-1:02:Njg1MTJjZGU5NjQzZTdkOWUwOTFkYzZkMDczZTgxMjhhMzIxNGFlNDQ3NTRmMzk3fr04jw==: --dhchap-ctrl-secret DHHC-1:01:NDE4MTg3NjkzZjFkMjE3OWIzZmY3ZTRhMjgwYTY4YzEWyQf7: 00:12:02.121 18:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:02.121 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:02.121 18:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:12:02.121 18:31:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.121 18:31:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.121 18:31:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.121 18:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:02.121 18:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:02.121 18:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:02.379 18:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:12:02.379 18:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:02.379 18:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:02.379 18:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:02.379 18:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:02.379 18:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:02.379 18:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-key key3 00:12:02.379 18:31:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.379 18:31:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.379 18:31:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.379 18:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:02.379 18:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:02.637 00:12:02.637 18:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:02.637 18:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:02.637 18:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:02.895 18:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:12:02.895 18:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:02.895 18:31:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.895 18:31:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.895 18:31:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.895 18:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:02.895 { 00:12:02.895 "auth": { 00:12:02.895 "dhgroup": "null", 00:12:02.895 "digest": "sha256", 00:12:02.895 "state": "completed" 00:12:02.895 }, 00:12:02.895 "cntlid": 7, 00:12:02.895 "listen_address": { 00:12:02.895 "adrfam": "IPv4", 00:12:02.895 "traddr": "10.0.0.2", 00:12:02.895 "trsvcid": "4420", 00:12:02.895 "trtype": "TCP" 00:12:02.895 }, 00:12:02.895 "peer_address": { 00:12:02.895 "adrfam": "IPv4", 00:12:02.895 "traddr": "10.0.0.1", 00:12:02.895 "trsvcid": "44706", 00:12:02.895 "trtype": "TCP" 00:12:02.895 }, 00:12:02.895 "qid": 0, 00:12:02.895 "state": "enabled", 00:12:02.895 "thread": "nvmf_tgt_poll_group_000" 00:12:02.895 } 00:12:02.895 ]' 00:12:02.895 18:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:02.895 18:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:02.895 18:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:02.895 18:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:02.895 18:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:02.895 18:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:02.895 18:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:02.895 18:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:03.153 18:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-secret DHHC-1:03:ZGUxN2MzNTZkY2QyNjA1Y2U4MTE2ODNjYWJjNGMzNTBlMjNhY2MwOWZiNjAxMjk2ZmQzMjk2YTRmMWZhMzNjNojQFFw=: 00:12:03.716 18:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:03.716 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:03.716 18:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:12:03.716 18:31:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.716 18:31:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.716 18:31:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.716 18:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:03.716 18:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:03.716 18:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:03.716 18:31:26 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:03.973 18:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:12:03.974 18:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:03.974 18:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:03.974 18:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:03.974 18:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:03.974 18:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:03.974 18:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:03.974 18:31:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.974 18:31:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.974 18:31:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.974 18:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:03.974 18:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:04.231 00:12:04.231 18:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:04.231 18:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:04.231 18:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:04.489 18:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:04.489 18:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:04.489 18:31:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.489 18:31:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.489 18:31:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.489 18:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:04.489 { 00:12:04.489 "auth": { 00:12:04.489 "dhgroup": "ffdhe2048", 00:12:04.489 "digest": "sha256", 00:12:04.489 "state": "completed" 00:12:04.489 }, 00:12:04.489 "cntlid": 9, 00:12:04.489 "listen_address": { 00:12:04.489 "adrfam": "IPv4", 00:12:04.489 "traddr": "10.0.0.2", 00:12:04.489 "trsvcid": "4420", 00:12:04.489 "trtype": "TCP" 00:12:04.489 }, 00:12:04.489 "peer_address": { 00:12:04.489 "adrfam": "IPv4", 00:12:04.489 "traddr": "10.0.0.1", 00:12:04.489 "trsvcid": "44726", 00:12:04.489 "trtype": "TCP" 00:12:04.489 }, 00:12:04.489 "qid": 0, 
00:12:04.489 "state": "enabled", 00:12:04.489 "thread": "nvmf_tgt_poll_group_000" 00:12:04.489 } 00:12:04.489 ]' 00:12:04.489 18:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:04.489 18:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:04.489 18:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:04.489 18:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:04.489 18:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:04.489 18:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:04.489 18:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:04.489 18:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:04.747 18:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-secret DHHC-1:00:OTdlNjAyOWFlOWU5YjYwNjUyMjcxY2MzOTk1YTIzNTFkZjg3MGM1YmI1ZjJkZWVjWYQ0Hg==: --dhchap-ctrl-secret DHHC-1:03:YmY3YzY0MWI1NWI1NTNlM2VkNmY3MDliZDAzOTUxNWFjMDRlNjU4Y2YxYTE4NzZjYTlmYmM4NmYxNzNmN2ZkNngQnKU=: 00:12:05.312 18:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:05.312 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:05.312 18:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:12:05.312 18:31:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.312 18:31:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.312 18:31:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.312 18:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:05.312 18:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:05.312 18:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:05.569 18:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:12:05.569 18:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:05.569 18:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:05.569 18:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:05.569 18:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:05.569 18:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:05.569 18:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:05.569 18:31:28 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.569 18:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.569 18:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.569 18:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:05.569 18:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:05.827 00:12:05.827 18:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:05.827 18:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:05.827 18:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:06.085 18:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:06.085 18:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:06.085 18:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.085 18:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.085 18:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.085 18:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:06.085 { 00:12:06.085 "auth": { 00:12:06.085 "dhgroup": "ffdhe2048", 00:12:06.085 "digest": "sha256", 00:12:06.085 "state": "completed" 00:12:06.085 }, 00:12:06.085 "cntlid": 11, 00:12:06.085 "listen_address": { 00:12:06.085 "adrfam": "IPv4", 00:12:06.085 "traddr": "10.0.0.2", 00:12:06.085 "trsvcid": "4420", 00:12:06.085 "trtype": "TCP" 00:12:06.085 }, 00:12:06.085 "peer_address": { 00:12:06.085 "adrfam": "IPv4", 00:12:06.085 "traddr": "10.0.0.1", 00:12:06.085 "trsvcid": "44772", 00:12:06.085 "trtype": "TCP" 00:12:06.085 }, 00:12:06.085 "qid": 0, 00:12:06.085 "state": "enabled", 00:12:06.085 "thread": "nvmf_tgt_poll_group_000" 00:12:06.085 } 00:12:06.085 ]' 00:12:06.085 18:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:06.085 18:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:06.085 18:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:06.085 18:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:06.085 18:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:06.085 18:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:06.085 18:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:06.085 18:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:06.342 18:31:28 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-secret DHHC-1:01:ZjlkNGIwMTQ3OTcyOWI3MWM0ODVkYjc4MjBjMzJiZjZeZGuM: --dhchap-ctrl-secret DHHC-1:02:ZGEyM2ZlOTg3NmNiMDk0ZDdhNGVhMjE3MDNkOTlkMmMxNGRhNWI1N2Y5NTViOTky87+8Nw==: 00:12:06.906 18:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:06.906 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:06.906 18:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:12:06.906 18:31:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.906 18:31:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.906 18:31:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.906 18:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:06.906 18:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:06.906 18:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:07.163 18:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:12:07.163 18:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:07.163 18:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:07.163 18:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:07.163 18:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:07.163 18:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:07.163 18:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:07.163 18:31:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.163 18:31:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.163 18:31:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.163 18:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:07.163 18:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:07.422 00:12:07.422 18:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:07.422 18:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc 
bdev_nvme_get_controllers 00:12:07.422 18:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:07.680 18:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:07.680 18:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:07.680 18:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.680 18:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.680 18:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.680 18:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:07.680 { 00:12:07.680 "auth": { 00:12:07.680 "dhgroup": "ffdhe2048", 00:12:07.680 "digest": "sha256", 00:12:07.680 "state": "completed" 00:12:07.680 }, 00:12:07.680 "cntlid": 13, 00:12:07.680 "listen_address": { 00:12:07.680 "adrfam": "IPv4", 00:12:07.680 "traddr": "10.0.0.2", 00:12:07.680 "trsvcid": "4420", 00:12:07.680 "trtype": "TCP" 00:12:07.680 }, 00:12:07.680 "peer_address": { 00:12:07.680 "adrfam": "IPv4", 00:12:07.680 "traddr": "10.0.0.1", 00:12:07.680 "trsvcid": "44794", 00:12:07.680 "trtype": "TCP" 00:12:07.680 }, 00:12:07.680 "qid": 0, 00:12:07.680 "state": "enabled", 00:12:07.680 "thread": "nvmf_tgt_poll_group_000" 00:12:07.680 } 00:12:07.680 ]' 00:12:07.680 18:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:07.680 18:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:07.680 18:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:07.680 18:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:07.680 18:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:07.938 18:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:07.938 18:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:07.938 18:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:07.938 18:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-secret DHHC-1:02:Njg1MTJjZGU5NjQzZTdkOWUwOTFkYzZkMDczZTgxMjhhMzIxNGFlNDQ3NTRmMzk3fr04jw==: --dhchap-ctrl-secret DHHC-1:01:NDE4MTg3NjkzZjFkMjE3OWIzZmY3ZTRhMjgwYTY4YzEWyQf7: 00:12:08.504 18:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:08.504 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:08.504 18:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:12:08.504 18:31:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.504 18:31:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.504 18:31:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.504 18:31:31 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:08.504 18:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:08.504 18:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:08.763 18:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:12:08.763 18:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:08.763 18:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:08.763 18:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:08.763 18:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:08.763 18:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:08.763 18:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-key key3 00:12:08.763 18:31:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.763 18:31:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.763 18:31:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.763 18:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:08.763 18:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:09.021 00:12:09.021 18:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:09.021 18:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:09.021 18:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:09.280 18:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:09.280 18:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:09.280 18:31:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.280 18:31:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.280 18:31:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.280 18:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:09.280 { 00:12:09.280 "auth": { 00:12:09.280 "dhgroup": "ffdhe2048", 00:12:09.280 "digest": "sha256", 00:12:09.280 "state": "completed" 00:12:09.280 }, 00:12:09.280 "cntlid": 15, 00:12:09.280 "listen_address": { 00:12:09.280 "adrfam": "IPv4", 00:12:09.280 "traddr": "10.0.0.2", 00:12:09.280 "trsvcid": "4420", 00:12:09.280 "trtype": "TCP" 00:12:09.280 }, 00:12:09.280 
"peer_address": { 00:12:09.280 "adrfam": "IPv4", 00:12:09.280 "traddr": "10.0.0.1", 00:12:09.280 "trsvcid": "44810", 00:12:09.280 "trtype": "TCP" 00:12:09.280 }, 00:12:09.280 "qid": 0, 00:12:09.280 "state": "enabled", 00:12:09.280 "thread": "nvmf_tgt_poll_group_000" 00:12:09.280 } 00:12:09.280 ]' 00:12:09.280 18:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:09.280 18:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:09.280 18:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:09.538 18:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:09.538 18:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:09.538 18:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:09.538 18:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:09.538 18:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:09.796 18:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-secret DHHC-1:03:ZGUxN2MzNTZkY2QyNjA1Y2U4MTE2ODNjYWJjNGMzNTBlMjNhY2MwOWZiNjAxMjk2ZmQzMjk2YTRmMWZhMzNjNojQFFw=: 00:12:10.362 18:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:10.362 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:10.362 18:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:12:10.362 18:31:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:10.362 18:31:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.362 18:31:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:10.362 18:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:10.362 18:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:10.362 18:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:10.362 18:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:10.362 18:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:12:10.362 18:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:10.362 18:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:10.362 18:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:10.362 18:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:10.362 18:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:10.362 18:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:10.362 18:31:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:10.362 18:31:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.362 18:31:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:10.362 18:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:10.362 18:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:10.626 00:12:10.884 18:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:10.884 18:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:10.884 18:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:10.884 18:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:10.884 18:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:10.884 18:31:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:10.884 18:31:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.884 18:31:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:10.884 18:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:10.884 { 00:12:10.884 "auth": { 00:12:10.884 "dhgroup": "ffdhe3072", 00:12:10.884 "digest": "sha256", 00:12:10.884 "state": "completed" 00:12:10.885 }, 00:12:10.885 "cntlid": 17, 00:12:10.885 "listen_address": { 00:12:10.885 "adrfam": "IPv4", 00:12:10.885 "traddr": "10.0.0.2", 00:12:10.885 "trsvcid": "4420", 00:12:10.885 "trtype": "TCP" 00:12:10.885 }, 00:12:10.885 "peer_address": { 00:12:10.885 "adrfam": "IPv4", 00:12:10.885 "traddr": "10.0.0.1", 00:12:10.885 "trsvcid": "44822", 00:12:10.885 "trtype": "TCP" 00:12:10.885 }, 00:12:10.885 "qid": 0, 00:12:10.885 "state": "enabled", 00:12:10.885 "thread": "nvmf_tgt_poll_group_000" 00:12:10.885 } 00:12:10.885 ]' 00:12:10.885 18:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:11.142 18:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:11.142 18:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:11.142 18:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:11.142 18:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:11.142 18:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:11.142 18:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:11.142 18:31:33 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:11.399 18:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-secret DHHC-1:00:OTdlNjAyOWFlOWU5YjYwNjUyMjcxY2MzOTk1YTIzNTFkZjg3MGM1YmI1ZjJkZWVjWYQ0Hg==: --dhchap-ctrl-secret DHHC-1:03:YmY3YzY0MWI1NWI1NTNlM2VkNmY3MDliZDAzOTUxNWFjMDRlNjU4Y2YxYTE4NzZjYTlmYmM4NmYxNzNmN2ZkNngQnKU=: 00:12:11.964 18:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:11.964 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:11.964 18:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:12:11.964 18:31:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.964 18:31:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.964 18:31:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.964 18:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:11.964 18:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:11.964 18:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:11.964 18:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:12:11.964 18:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:11.964 18:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:12.222 18:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:12.222 18:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:12.222 18:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:12.222 18:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:12.222 18:31:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.222 18:31:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.222 18:31:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.222 18:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:12.222 18:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:12.478 00:12:12.478 18:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:12.478 18:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:12.478 18:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:12.478 18:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:12.478 18:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:12.478 18:31:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.478 18:31:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.735 18:31:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.735 18:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:12.735 { 00:12:12.735 "auth": { 00:12:12.735 "dhgroup": "ffdhe3072", 00:12:12.735 "digest": "sha256", 00:12:12.735 "state": "completed" 00:12:12.735 }, 00:12:12.735 "cntlid": 19, 00:12:12.735 "listen_address": { 00:12:12.735 "adrfam": "IPv4", 00:12:12.735 "traddr": "10.0.0.2", 00:12:12.735 "trsvcid": "4420", 00:12:12.735 "trtype": "TCP" 00:12:12.735 }, 00:12:12.735 "peer_address": { 00:12:12.735 "adrfam": "IPv4", 00:12:12.735 "traddr": "10.0.0.1", 00:12:12.736 "trsvcid": "51082", 00:12:12.736 "trtype": "TCP" 00:12:12.736 }, 00:12:12.736 "qid": 0, 00:12:12.736 "state": "enabled", 00:12:12.736 "thread": "nvmf_tgt_poll_group_000" 00:12:12.736 } 00:12:12.736 ]' 00:12:12.736 18:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:12.736 18:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:12.736 18:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:12.736 18:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:12.736 18:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:12.736 18:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:12.736 18:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:12.736 18:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:12.992 18:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-secret DHHC-1:01:ZjlkNGIwMTQ3OTcyOWI3MWM0ODVkYjc4MjBjMzJiZjZeZGuM: --dhchap-ctrl-secret DHHC-1:02:ZGEyM2ZlOTg3NmNiMDk0ZDdhNGVhMjE3MDNkOTlkMmMxNGRhNWI1N2Y5NTViOTky87+8Nw==: 00:12:13.557 18:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:13.557 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:13.557 18:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:12:13.557 18:31:36 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.557 18:31:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.557 18:31:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.557 18:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:13.557 18:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:13.557 18:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:13.814 18:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:12:13.814 18:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:13.814 18:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:13.814 18:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:13.814 18:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:13.814 18:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:13.814 18:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:13.814 18:31:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.814 18:31:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.814 18:31:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.814 18:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:13.814 18:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:14.070 00:12:14.070 18:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:14.070 18:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:14.071 18:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:14.328 18:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:14.328 18:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:14.328 18:31:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.328 18:31:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.328 18:31:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.328 18:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:14.328 { 00:12:14.328 "auth": { 
00:12:14.328 "dhgroup": "ffdhe3072", 00:12:14.328 "digest": "sha256", 00:12:14.328 "state": "completed" 00:12:14.328 }, 00:12:14.328 "cntlid": 21, 00:12:14.328 "listen_address": { 00:12:14.328 "adrfam": "IPv4", 00:12:14.328 "traddr": "10.0.0.2", 00:12:14.328 "trsvcid": "4420", 00:12:14.328 "trtype": "TCP" 00:12:14.328 }, 00:12:14.328 "peer_address": { 00:12:14.328 "adrfam": "IPv4", 00:12:14.328 "traddr": "10.0.0.1", 00:12:14.328 "trsvcid": "51112", 00:12:14.328 "trtype": "TCP" 00:12:14.328 }, 00:12:14.328 "qid": 0, 00:12:14.328 "state": "enabled", 00:12:14.328 "thread": "nvmf_tgt_poll_group_000" 00:12:14.328 } 00:12:14.328 ]' 00:12:14.328 18:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:14.328 18:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:14.328 18:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:14.328 18:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:14.328 18:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:14.585 18:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:14.585 18:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:14.585 18:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:14.585 18:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-secret DHHC-1:02:Njg1MTJjZGU5NjQzZTdkOWUwOTFkYzZkMDczZTgxMjhhMzIxNGFlNDQ3NTRmMzk3fr04jw==: --dhchap-ctrl-secret DHHC-1:01:NDE4MTg3NjkzZjFkMjE3OWIzZmY3ZTRhMjgwYTY4YzEWyQf7: 00:12:15.152 18:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:15.152 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:15.152 18:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:12:15.152 18:31:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.152 18:31:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.152 18:31:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.152 18:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:15.152 18:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:15.152 18:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:15.419 18:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:12:15.419 18:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:15.419 18:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:15.419 18:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 
00:12:15.420 18:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:15.420 18:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:15.420 18:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-key key3 00:12:15.420 18:31:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.420 18:31:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.420 18:31:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.420 18:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:15.420 18:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:15.676 00:12:15.676 18:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:15.676 18:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:15.676 18:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:15.933 18:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:15.933 18:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:15.933 18:31:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.933 18:31:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.933 18:31:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.933 18:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:15.933 { 00:12:15.933 "auth": { 00:12:15.933 "dhgroup": "ffdhe3072", 00:12:15.933 "digest": "sha256", 00:12:15.933 "state": "completed" 00:12:15.933 }, 00:12:15.933 "cntlid": 23, 00:12:15.933 "listen_address": { 00:12:15.933 "adrfam": "IPv4", 00:12:15.933 "traddr": "10.0.0.2", 00:12:15.933 "trsvcid": "4420", 00:12:15.933 "trtype": "TCP" 00:12:15.933 }, 00:12:15.933 "peer_address": { 00:12:15.933 "adrfam": "IPv4", 00:12:15.933 "traddr": "10.0.0.1", 00:12:15.933 "trsvcid": "51144", 00:12:15.933 "trtype": "TCP" 00:12:15.933 }, 00:12:15.933 "qid": 0, 00:12:15.933 "state": "enabled", 00:12:15.933 "thread": "nvmf_tgt_poll_group_000" 00:12:15.933 } 00:12:15.933 ]' 00:12:15.933 18:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:15.933 18:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:15.933 18:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:15.933 18:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:15.933 18:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:16.189 18:31:38 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:16.189 18:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:16.189 18:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:16.189 18:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-secret DHHC-1:03:ZGUxN2MzNTZkY2QyNjA1Y2U4MTE2ODNjYWJjNGMzNTBlMjNhY2MwOWZiNjAxMjk2ZmQzMjk2YTRmMWZhMzNjNojQFFw=: 00:12:16.755 18:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:16.755 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:16.755 18:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:12:16.755 18:31:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:16.755 18:31:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.755 18:31:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:16.755 18:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:16.755 18:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:16.755 18:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:16.755 18:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:17.012 18:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:12:17.012 18:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:17.012 18:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:17.012 18:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:17.012 18:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:17.012 18:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:17.012 18:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:17.012 18:31:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.012 18:31:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.012 18:31:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.012 18:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:17.012 18:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:17.270 00:12:17.270 18:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:17.270 18:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:17.270 18:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:17.527 18:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:17.527 18:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:17.527 18:31:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.527 18:31:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.527 18:31:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.527 18:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:17.527 { 00:12:17.527 "auth": { 00:12:17.527 "dhgroup": "ffdhe4096", 00:12:17.527 "digest": "sha256", 00:12:17.527 "state": "completed" 00:12:17.527 }, 00:12:17.527 "cntlid": 25, 00:12:17.527 "listen_address": { 00:12:17.527 "adrfam": "IPv4", 00:12:17.527 "traddr": "10.0.0.2", 00:12:17.527 "trsvcid": "4420", 00:12:17.527 "trtype": "TCP" 00:12:17.527 }, 00:12:17.527 "peer_address": { 00:12:17.527 "adrfam": "IPv4", 00:12:17.527 "traddr": "10.0.0.1", 00:12:17.527 "trsvcid": "51176", 00:12:17.527 "trtype": "TCP" 00:12:17.527 }, 00:12:17.527 "qid": 0, 00:12:17.527 "state": "enabled", 00:12:17.527 "thread": "nvmf_tgt_poll_group_000" 00:12:17.527 } 00:12:17.527 ]' 00:12:17.527 18:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:17.527 18:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:17.527 18:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:17.784 18:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:17.784 18:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:17.784 18:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:17.784 18:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:17.784 18:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:17.785 18:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-secret DHHC-1:00:OTdlNjAyOWFlOWU5YjYwNjUyMjcxY2MzOTk1YTIzNTFkZjg3MGM1YmI1ZjJkZWVjWYQ0Hg==: --dhchap-ctrl-secret DHHC-1:03:YmY3YzY0MWI1NWI1NTNlM2VkNmY3MDliZDAzOTUxNWFjMDRlNjU4Y2YxYTE4NzZjYTlmYmM4NmYxNzNmN2ZkNngQnKU=: 00:12:18.353 18:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:18.353 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:18.353 
18:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:12:18.353 18:31:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.353 18:31:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.677 18:31:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.677 18:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:18.677 18:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:18.677 18:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:18.677 18:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:12:18.677 18:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:18.677 18:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:18.677 18:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:18.677 18:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:18.677 18:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:18.677 18:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:18.677 18:31:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.677 18:31:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.677 18:31:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.677 18:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:18.677 18:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:18.934 00:12:18.934 18:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:18.934 18:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:18.934 18:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:19.193 18:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:19.193 18:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:19.193 18:31:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.193 18:31:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:12:19.193 18:31:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.193 18:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:19.193 { 00:12:19.193 "auth": { 00:12:19.193 "dhgroup": "ffdhe4096", 00:12:19.193 "digest": "sha256", 00:12:19.193 "state": "completed" 00:12:19.193 }, 00:12:19.193 "cntlid": 27, 00:12:19.193 "listen_address": { 00:12:19.193 "adrfam": "IPv4", 00:12:19.193 "traddr": "10.0.0.2", 00:12:19.193 "trsvcid": "4420", 00:12:19.193 "trtype": "TCP" 00:12:19.193 }, 00:12:19.193 "peer_address": { 00:12:19.193 "adrfam": "IPv4", 00:12:19.193 "traddr": "10.0.0.1", 00:12:19.193 "trsvcid": "51214", 00:12:19.193 "trtype": "TCP" 00:12:19.193 }, 00:12:19.193 "qid": 0, 00:12:19.193 "state": "enabled", 00:12:19.193 "thread": "nvmf_tgt_poll_group_000" 00:12:19.193 } 00:12:19.193 ]' 00:12:19.193 18:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:19.193 18:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:19.193 18:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:19.193 18:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:19.193 18:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:19.451 18:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:19.451 18:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:19.451 18:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:19.451 18:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-secret DHHC-1:01:ZjlkNGIwMTQ3OTcyOWI3MWM0ODVkYjc4MjBjMzJiZjZeZGuM: --dhchap-ctrl-secret DHHC-1:02:ZGEyM2ZlOTg3NmNiMDk0ZDdhNGVhMjE3MDNkOTlkMmMxNGRhNWI1N2Y5NTViOTky87+8Nw==: 00:12:20.017 18:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:20.017 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:20.017 18:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:12:20.017 18:31:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.017 18:31:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.017 18:31:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.017 18:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:20.017 18:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:20.017 18:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:20.275 18:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:12:20.275 18:31:42 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:20.275 18:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:20.275 18:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:20.275 18:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:20.275 18:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:20.275 18:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:20.275 18:31:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.275 18:31:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.275 18:31:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.275 18:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:20.275 18:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:20.533 00:12:20.533 18:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:20.533 18:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:20.533 18:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:20.791 18:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:20.791 18:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:20.791 18:31:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.791 18:31:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.791 18:31:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.791 18:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:20.791 { 00:12:20.791 "auth": { 00:12:20.791 "dhgroup": "ffdhe4096", 00:12:20.791 "digest": "sha256", 00:12:20.791 "state": "completed" 00:12:20.791 }, 00:12:20.791 "cntlid": 29, 00:12:20.791 "listen_address": { 00:12:20.791 "adrfam": "IPv4", 00:12:20.791 "traddr": "10.0.0.2", 00:12:20.791 "trsvcid": "4420", 00:12:20.791 "trtype": "TCP" 00:12:20.791 }, 00:12:20.791 "peer_address": { 00:12:20.791 "adrfam": "IPv4", 00:12:20.791 "traddr": "10.0.0.1", 00:12:20.791 "trsvcid": "51236", 00:12:20.791 "trtype": "TCP" 00:12:20.791 }, 00:12:20.791 "qid": 0, 00:12:20.791 "state": "enabled", 00:12:20.791 "thread": "nvmf_tgt_poll_group_000" 00:12:20.791 } 00:12:20.791 ]' 00:12:20.791 18:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:20.791 18:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:20.791 18:31:43 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:20.791 18:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:20.791 18:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:21.049 18:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:21.049 18:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:21.049 18:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:21.049 18:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-secret DHHC-1:02:Njg1MTJjZGU5NjQzZTdkOWUwOTFkYzZkMDczZTgxMjhhMzIxNGFlNDQ3NTRmMzk3fr04jw==: --dhchap-ctrl-secret DHHC-1:01:NDE4MTg3NjkzZjFkMjE3OWIzZmY3ZTRhMjgwYTY4YzEWyQf7: 00:12:21.637 18:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:21.637 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:21.637 18:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:12:21.637 18:31:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.637 18:31:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.637 18:31:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.637 18:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:21.637 18:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:21.637 18:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:21.896 18:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:12:21.896 18:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:21.896 18:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:21.896 18:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:21.896 18:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:21.896 18:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:21.896 18:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-key key3 00:12:21.896 18:31:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.896 18:31:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.896 18:31:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.896 18:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:21.896 18:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:22.155 00:12:22.413 18:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:22.413 18:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:22.413 18:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:22.413 18:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:22.413 18:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:22.413 18:31:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.413 18:31:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.413 18:31:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.413 18:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:22.413 { 00:12:22.413 "auth": { 00:12:22.413 "dhgroup": "ffdhe4096", 00:12:22.413 "digest": "sha256", 00:12:22.413 "state": "completed" 00:12:22.413 }, 00:12:22.413 "cntlid": 31, 00:12:22.413 "listen_address": { 00:12:22.413 "adrfam": "IPv4", 00:12:22.413 "traddr": "10.0.0.2", 00:12:22.413 "trsvcid": "4420", 00:12:22.413 "trtype": "TCP" 00:12:22.413 }, 00:12:22.413 "peer_address": { 00:12:22.413 "adrfam": "IPv4", 00:12:22.413 "traddr": "10.0.0.1", 00:12:22.413 "trsvcid": "38156", 00:12:22.413 "trtype": "TCP" 00:12:22.413 }, 00:12:22.413 "qid": 0, 00:12:22.413 "state": "enabled", 00:12:22.413 "thread": "nvmf_tgt_poll_group_000" 00:12:22.413 } 00:12:22.413 ]' 00:12:22.413 18:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:22.672 18:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:22.672 18:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:22.672 18:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:22.672 18:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:22.672 18:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:22.672 18:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:22.672 18:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:22.930 18:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-secret DHHC-1:03:ZGUxN2MzNTZkY2QyNjA1Y2U4MTE2ODNjYWJjNGMzNTBlMjNhY2MwOWZiNjAxMjk2ZmQzMjk2YTRmMWZhMzNjNojQFFw=: 00:12:23.497 18:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:23.497 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:23.497 18:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:12:23.497 18:31:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:23.497 18:31:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.497 18:31:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:23.497 18:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:23.497 18:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:23.497 18:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:23.497 18:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:23.497 18:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:12:23.497 18:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:23.497 18:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:23.497 18:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:23.497 18:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:23.497 18:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:23.497 18:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:23.497 18:31:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:23.497 18:31:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.497 18:31:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:23.497 18:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:23.497 18:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:24.065 00:12:24.065 18:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:24.065 18:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:24.065 18:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:24.065 18:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:24.065 18:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
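The nvme connect / "NQN:... disconnected 1 controller(s)" exchanges that keep appearing above (most recently at the start of this ffdhe6144 pass) are the second half of each pass: once the SPDK-host attach has been verified, the same key is exercised from the Linux kernel initiator with nvme-cli, and the host entry is then removed from the subsystem. A condensed sketch of that leg follows, again limited to flags visible in this log; the two DHHC-1 strings are placeholders for the full secret and controller secret printed in the log for the key index under test, and the default target RPC socket is again an assumption.

    # Kernel-initiator leg of one pass (sketch, not the literal auth.sh source).
    dhchap_secret='DHHC-1:xx:<placeholder for the full secret printed in the log>:'
    dhchap_ctrl_secret='DHHC-1:xx:<placeholder for the full ctrl secret printed in the log>:'

    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 \
        --hostid ee8aff67-4252-4979-91cf-1a72f40d57b6 \
        --dhchap-secret "$dhchap_secret" --dhchap-ctrl-secret "$dhchap_ctrl_secret"

    # This produces the "NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)" lines.
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0

    # Drop the host from the subsystem so the next digest/dhgroup/key pass starts clean
    # (default target RPC socket assumed; the log's rpc_cmd wrapper hides the socket).
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_host \
        nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6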
00:12:24.065 18:31:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.065 18:31:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.323 18:31:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.323 18:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:24.323 { 00:12:24.323 "auth": { 00:12:24.323 "dhgroup": "ffdhe6144", 00:12:24.323 "digest": "sha256", 00:12:24.323 "state": "completed" 00:12:24.323 }, 00:12:24.323 "cntlid": 33, 00:12:24.323 "listen_address": { 00:12:24.323 "adrfam": "IPv4", 00:12:24.323 "traddr": "10.0.0.2", 00:12:24.323 "trsvcid": "4420", 00:12:24.323 "trtype": "TCP" 00:12:24.323 }, 00:12:24.323 "peer_address": { 00:12:24.323 "adrfam": "IPv4", 00:12:24.323 "traddr": "10.0.0.1", 00:12:24.323 "trsvcid": "38184", 00:12:24.323 "trtype": "TCP" 00:12:24.323 }, 00:12:24.323 "qid": 0, 00:12:24.323 "state": "enabled", 00:12:24.323 "thread": "nvmf_tgt_poll_group_000" 00:12:24.323 } 00:12:24.323 ]' 00:12:24.324 18:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:24.324 18:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:24.324 18:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:24.324 18:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:24.324 18:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:24.324 18:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:24.324 18:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:24.324 18:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:24.582 18:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-secret DHHC-1:00:OTdlNjAyOWFlOWU5YjYwNjUyMjcxY2MzOTk1YTIzNTFkZjg3MGM1YmI1ZjJkZWVjWYQ0Hg==: --dhchap-ctrl-secret DHHC-1:03:YmY3YzY0MWI1NWI1NTNlM2VkNmY3MDliZDAzOTUxNWFjMDRlNjU4Y2YxYTE4NzZjYTlmYmM4NmYxNzNmN2ZkNngQnKU=: 00:12:25.148 18:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:25.148 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:25.148 18:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:12:25.148 18:31:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.148 18:31:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.148 18:31:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.148 18:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:25.148 18:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:25.148 18:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:25.148 18:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:12:25.148 18:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:25.148 18:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:25.148 18:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:25.148 18:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:25.148 18:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:25.148 18:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:25.148 18:31:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.148 18:31:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.406 18:31:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.406 18:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:25.406 18:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:25.664 00:12:25.664 18:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:25.664 18:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:25.664 18:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:25.943 18:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:25.943 18:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:25.943 18:31:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.943 18:31:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.943 18:31:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.943 18:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:25.943 { 00:12:25.943 "auth": { 00:12:25.943 "dhgroup": "ffdhe6144", 00:12:25.943 "digest": "sha256", 00:12:25.943 "state": "completed" 00:12:25.943 }, 00:12:25.943 "cntlid": 35, 00:12:25.943 "listen_address": { 00:12:25.943 "adrfam": "IPv4", 00:12:25.943 "traddr": "10.0.0.2", 00:12:25.943 "trsvcid": "4420", 00:12:25.943 "trtype": "TCP" 00:12:25.943 }, 00:12:25.943 "peer_address": { 00:12:25.943 "adrfam": "IPv4", 00:12:25.943 "traddr": "10.0.0.1", 00:12:25.943 "trsvcid": "38212", 00:12:25.943 "trtype": "TCP" 00:12:25.943 }, 00:12:25.943 "qid": 0, 00:12:25.943 "state": "enabled", 00:12:25.943 "thread": "nvmf_tgt_poll_group_000" 00:12:25.943 } 00:12:25.943 ]' 00:12:25.943 18:31:48 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:25.943 18:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:25.943 18:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:25.943 18:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:25.943 18:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:25.943 18:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:25.943 18:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:25.943 18:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:26.201 18:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-secret DHHC-1:01:ZjlkNGIwMTQ3OTcyOWI3MWM0ODVkYjc4MjBjMzJiZjZeZGuM: --dhchap-ctrl-secret DHHC-1:02:ZGEyM2ZlOTg3NmNiMDk0ZDdhNGVhMjE3MDNkOTlkMmMxNGRhNWI1N2Y5NTViOTky87+8Nw==: 00:12:26.767 18:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:26.767 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:26.767 18:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:12:26.767 18:31:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.767 18:31:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.767 18:31:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.767 18:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:26.767 18:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:26.767 18:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:27.025 18:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:12:27.025 18:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:27.025 18:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:27.025 18:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:27.025 18:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:27.025 18:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:27.025 18:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:27.025 18:31:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.025 18:31:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.025 
18:31:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.025 18:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:27.025 18:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:27.308 00:12:27.308 18:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:27.308 18:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:27.308 18:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:27.567 18:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:27.567 18:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:27.567 18:31:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.567 18:31:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.567 18:31:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.567 18:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:27.567 { 00:12:27.567 "auth": { 00:12:27.567 "dhgroup": "ffdhe6144", 00:12:27.567 "digest": "sha256", 00:12:27.567 "state": "completed" 00:12:27.567 }, 00:12:27.567 "cntlid": 37, 00:12:27.567 "listen_address": { 00:12:27.567 "adrfam": "IPv4", 00:12:27.567 "traddr": "10.0.0.2", 00:12:27.567 "trsvcid": "4420", 00:12:27.567 "trtype": "TCP" 00:12:27.567 }, 00:12:27.567 "peer_address": { 00:12:27.567 "adrfam": "IPv4", 00:12:27.567 "traddr": "10.0.0.1", 00:12:27.567 "trsvcid": "38246", 00:12:27.567 "trtype": "TCP" 00:12:27.567 }, 00:12:27.567 "qid": 0, 00:12:27.567 "state": "enabled", 00:12:27.567 "thread": "nvmf_tgt_poll_group_000" 00:12:27.567 } 00:12:27.567 ]' 00:12:27.567 18:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:27.567 18:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:27.567 18:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:27.567 18:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:27.567 18:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:27.826 18:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:27.826 18:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:27.826 18:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:27.826 18:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid 
ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-secret DHHC-1:02:Njg1MTJjZGU5NjQzZTdkOWUwOTFkYzZkMDczZTgxMjhhMzIxNGFlNDQ3NTRmMzk3fr04jw==: --dhchap-ctrl-secret DHHC-1:01:NDE4MTg3NjkzZjFkMjE3OWIzZmY3ZTRhMjgwYTY4YzEWyQf7: 00:12:28.392 18:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:28.392 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:28.392 18:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:12:28.392 18:31:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:28.392 18:31:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.392 18:31:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:28.392 18:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:28.392 18:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:28.392 18:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:28.649 18:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:12:28.649 18:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:28.649 18:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:28.649 18:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:28.649 18:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:28.649 18:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:28.650 18:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-key key3 00:12:28.650 18:31:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:28.650 18:31:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.650 18:31:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:28.650 18:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:28.650 18:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:29.215 00:12:29.215 18:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:29.215 18:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:29.215 18:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:29.215 18:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 
-- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:29.215 18:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:29.215 18:31:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.215 18:31:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.215 18:31:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.215 18:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:29.215 { 00:12:29.215 "auth": { 00:12:29.215 "dhgroup": "ffdhe6144", 00:12:29.215 "digest": "sha256", 00:12:29.215 "state": "completed" 00:12:29.215 }, 00:12:29.215 "cntlid": 39, 00:12:29.215 "listen_address": { 00:12:29.215 "adrfam": "IPv4", 00:12:29.215 "traddr": "10.0.0.2", 00:12:29.215 "trsvcid": "4420", 00:12:29.215 "trtype": "TCP" 00:12:29.215 }, 00:12:29.215 "peer_address": { 00:12:29.215 "adrfam": "IPv4", 00:12:29.215 "traddr": "10.0.0.1", 00:12:29.215 "trsvcid": "38278", 00:12:29.215 "trtype": "TCP" 00:12:29.215 }, 00:12:29.215 "qid": 0, 00:12:29.215 "state": "enabled", 00:12:29.215 "thread": "nvmf_tgt_poll_group_000" 00:12:29.215 } 00:12:29.215 ]' 00:12:29.215 18:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:29.473 18:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:29.473 18:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:29.473 18:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:29.473 18:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:29.473 18:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:29.473 18:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:29.473 18:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:29.731 18:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-secret DHHC-1:03:ZGUxN2MzNTZkY2QyNjA1Y2U4MTE2ODNjYWJjNGMzNTBlMjNhY2MwOWZiNjAxMjk2ZmQzMjk2YTRmMWZhMzNjNojQFFw=: 00:12:30.297 18:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:30.297 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:30.297 18:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:12:30.297 18:31:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.297 18:31:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.297 18:31:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.297 18:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:30.297 18:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:30.297 18:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups 
ffdhe8192 00:12:30.297 18:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:30.555 18:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:12:30.555 18:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:30.555 18:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:30.555 18:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:30.555 18:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:30.555 18:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:30.555 18:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:30.555 18:31:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.555 18:31:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.555 18:31:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.555 18:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:30.555 18:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:31.142 00:12:31.142 18:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:31.142 18:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:31.142 18:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:31.142 18:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:31.142 18:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:31.142 18:31:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.142 18:31:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.142 18:31:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.142 18:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:31.142 { 00:12:31.142 "auth": { 00:12:31.142 "dhgroup": "ffdhe8192", 00:12:31.142 "digest": "sha256", 00:12:31.142 "state": "completed" 00:12:31.142 }, 00:12:31.142 "cntlid": 41, 00:12:31.142 "listen_address": { 00:12:31.142 "adrfam": "IPv4", 00:12:31.142 "traddr": "10.0.0.2", 00:12:31.142 "trsvcid": "4420", 00:12:31.142 "trtype": "TCP" 00:12:31.142 }, 00:12:31.142 "peer_address": { 00:12:31.142 "adrfam": "IPv4", 00:12:31.142 "traddr": "10.0.0.1", 00:12:31.142 "trsvcid": "38312", 00:12:31.142 "trtype": "TCP" 00:12:31.142 }, 
00:12:31.142 "qid": 0, 00:12:31.142 "state": "enabled", 00:12:31.142 "thread": "nvmf_tgt_poll_group_000" 00:12:31.142 } 00:12:31.142 ]' 00:12:31.142 18:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:31.401 18:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:31.401 18:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:31.401 18:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:31.401 18:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:31.401 18:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:31.401 18:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:31.401 18:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:31.658 18:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-secret DHHC-1:00:OTdlNjAyOWFlOWU5YjYwNjUyMjcxY2MzOTk1YTIzNTFkZjg3MGM1YmI1ZjJkZWVjWYQ0Hg==: --dhchap-ctrl-secret DHHC-1:03:YmY3YzY0MWI1NWI1NTNlM2VkNmY3MDliZDAzOTUxNWFjMDRlNjU4Y2YxYTE4NzZjYTlmYmM4NmYxNzNmN2ZkNngQnKU=: 00:12:32.226 18:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:32.226 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:32.226 18:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:12:32.226 18:31:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.226 18:31:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.226 18:31:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.226 18:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:32.227 18:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:32.227 18:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:32.484 18:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:12:32.484 18:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:32.484 18:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:32.484 18:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:32.484 18:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:32.484 18:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:32.484 18:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
00:12:32.484 18:31:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.484 18:31:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.484 18:31:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.484 18:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:32.484 18:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:33.050 00:12:33.050 18:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:33.050 18:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:33.050 18:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:33.308 18:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:33.308 18:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:33.308 18:31:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.308 18:31:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.308 18:31:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.308 18:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:33.308 { 00:12:33.308 "auth": { 00:12:33.308 "dhgroup": "ffdhe8192", 00:12:33.308 "digest": "sha256", 00:12:33.308 "state": "completed" 00:12:33.308 }, 00:12:33.308 "cntlid": 43, 00:12:33.308 "listen_address": { 00:12:33.308 "adrfam": "IPv4", 00:12:33.308 "traddr": "10.0.0.2", 00:12:33.308 "trsvcid": "4420", 00:12:33.308 "trtype": "TCP" 00:12:33.308 }, 00:12:33.308 "peer_address": { 00:12:33.308 "adrfam": "IPv4", 00:12:33.308 "traddr": "10.0.0.1", 00:12:33.308 "trsvcid": "38950", 00:12:33.308 "trtype": "TCP" 00:12:33.308 }, 00:12:33.308 "qid": 0, 00:12:33.308 "state": "enabled", 00:12:33.308 "thread": "nvmf_tgt_poll_group_000" 00:12:33.308 } 00:12:33.308 ]' 00:12:33.308 18:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:33.308 18:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:33.308 18:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:33.309 18:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:33.309 18:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:33.309 18:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:33.309 18:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:33.309 18:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:33.566 18:31:56 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-secret DHHC-1:01:ZjlkNGIwMTQ3OTcyOWI3MWM0ODVkYjc4MjBjMzJiZjZeZGuM: --dhchap-ctrl-secret DHHC-1:02:ZGEyM2ZlOTg3NmNiMDk0ZDdhNGVhMjE3MDNkOTlkMmMxNGRhNWI1N2Y5NTViOTky87+8Nw==: 00:12:34.131 18:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:34.131 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:34.131 18:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:12:34.131 18:31:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.131 18:31:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.131 18:31:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.131 18:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:34.131 18:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:34.131 18:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:34.389 18:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:12:34.389 18:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:34.389 18:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:34.389 18:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:34.389 18:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:34.389 18:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:34.389 18:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:34.389 18:31:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.389 18:31:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.389 18:31:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.389 18:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:34.389 18:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:34.955 00:12:34.955 18:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:34.955 18:31:57 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:12:34.955 18:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:35.212 18:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:35.212 18:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:35.212 18:31:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.212 18:31:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.212 18:31:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.212 18:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:35.212 { 00:12:35.212 "auth": { 00:12:35.212 "dhgroup": "ffdhe8192", 00:12:35.212 "digest": "sha256", 00:12:35.212 "state": "completed" 00:12:35.212 }, 00:12:35.212 "cntlid": 45, 00:12:35.212 "listen_address": { 00:12:35.212 "adrfam": "IPv4", 00:12:35.212 "traddr": "10.0.0.2", 00:12:35.212 "trsvcid": "4420", 00:12:35.212 "trtype": "TCP" 00:12:35.212 }, 00:12:35.212 "peer_address": { 00:12:35.212 "adrfam": "IPv4", 00:12:35.212 "traddr": "10.0.0.1", 00:12:35.212 "trsvcid": "38974", 00:12:35.212 "trtype": "TCP" 00:12:35.212 }, 00:12:35.212 "qid": 0, 00:12:35.212 "state": "enabled", 00:12:35.212 "thread": "nvmf_tgt_poll_group_000" 00:12:35.212 } 00:12:35.212 ]' 00:12:35.212 18:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:35.212 18:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:35.212 18:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:35.212 18:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:35.212 18:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:35.212 18:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:35.212 18:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:35.212 18:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:35.470 18:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-secret DHHC-1:02:Njg1MTJjZGU5NjQzZTdkOWUwOTFkYzZkMDczZTgxMjhhMzIxNGFlNDQ3NTRmMzk3fr04jw==: --dhchap-ctrl-secret DHHC-1:01:NDE4MTg3NjkzZjFkMjE3OWIzZmY3ZTRhMjgwYTY4YzEWyQf7: 00:12:36.034 18:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:36.292 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:36.292 18:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:12:36.292 18:31:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.292 18:31:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.292 18:31:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.292 18:31:58 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:36.292 18:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:36.292 18:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:36.292 18:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:12:36.292 18:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:36.292 18:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:36.292 18:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:36.292 18:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:36.292 18:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:36.292 18:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-key key3 00:12:36.292 18:31:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.292 18:31:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.292 18:31:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.292 18:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:36.292 18:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:36.859 00:12:36.859 18:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:36.859 18:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:36.859 18:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:37.117 18:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:37.117 18:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:37.117 18:31:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.117 18:31:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.117 18:31:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.117 18:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:37.117 { 00:12:37.117 "auth": { 00:12:37.117 "dhgroup": "ffdhe8192", 00:12:37.117 "digest": "sha256", 00:12:37.117 "state": "completed" 00:12:37.117 }, 00:12:37.117 "cntlid": 47, 00:12:37.117 "listen_address": { 00:12:37.117 "adrfam": "IPv4", 00:12:37.117 "traddr": "10.0.0.2", 00:12:37.117 "trsvcid": "4420", 00:12:37.117 "trtype": "TCP" 00:12:37.117 }, 00:12:37.117 
"peer_address": { 00:12:37.117 "adrfam": "IPv4", 00:12:37.117 "traddr": "10.0.0.1", 00:12:37.117 "trsvcid": "38996", 00:12:37.117 "trtype": "TCP" 00:12:37.117 }, 00:12:37.117 "qid": 0, 00:12:37.117 "state": "enabled", 00:12:37.117 "thread": "nvmf_tgt_poll_group_000" 00:12:37.117 } 00:12:37.117 ]' 00:12:37.117 18:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:37.117 18:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:37.117 18:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:37.375 18:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:37.375 18:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:37.375 18:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:37.375 18:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:37.375 18:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:37.633 18:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-secret DHHC-1:03:ZGUxN2MzNTZkY2QyNjA1Y2U4MTE2ODNjYWJjNGMzNTBlMjNhY2MwOWZiNjAxMjk2ZmQzMjk2YTRmMWZhMzNjNojQFFw=: 00:12:38.200 18:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:38.200 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:38.200 18:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:12:38.200 18:32:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.200 18:32:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.200 18:32:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.200 18:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:12:38.200 18:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:38.200 18:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:38.201 18:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:38.201 18:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:38.201 18:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:12:38.201 18:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:38.201 18:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:38.201 18:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:38.201 18:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:38.201 18:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 
00:12:38.201 18:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:38.201 18:32:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.201 18:32:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.501 18:32:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.501 18:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:38.501 18:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:38.501 00:12:38.501 18:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:38.501 18:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:38.501 18:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:38.768 18:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:38.768 18:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:38.768 18:32:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.768 18:32:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.768 18:32:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.768 18:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:38.768 { 00:12:38.768 "auth": { 00:12:38.768 "dhgroup": "null", 00:12:38.768 "digest": "sha384", 00:12:38.768 "state": "completed" 00:12:38.768 }, 00:12:38.768 "cntlid": 49, 00:12:38.768 "listen_address": { 00:12:38.768 "adrfam": "IPv4", 00:12:38.768 "traddr": "10.0.0.2", 00:12:38.768 "trsvcid": "4420", 00:12:38.768 "trtype": "TCP" 00:12:38.768 }, 00:12:38.768 "peer_address": { 00:12:38.768 "adrfam": "IPv4", 00:12:38.768 "traddr": "10.0.0.1", 00:12:38.768 "trsvcid": "39016", 00:12:38.768 "trtype": "TCP" 00:12:38.768 }, 00:12:38.768 "qid": 0, 00:12:38.768 "state": "enabled", 00:12:38.768 "thread": "nvmf_tgt_poll_group_000" 00:12:38.768 } 00:12:38.768 ]' 00:12:38.768 18:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:38.768 18:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:38.768 18:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:39.027 18:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:39.027 18:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:39.027 18:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:39.027 18:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:12:39.027 18:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:39.286 18:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-secret DHHC-1:00:OTdlNjAyOWFlOWU5YjYwNjUyMjcxY2MzOTk1YTIzNTFkZjg3MGM1YmI1ZjJkZWVjWYQ0Hg==: --dhchap-ctrl-secret DHHC-1:03:YmY3YzY0MWI1NWI1NTNlM2VkNmY3MDliZDAzOTUxNWFjMDRlNjU4Y2YxYTE4NzZjYTlmYmM4NmYxNzNmN2ZkNngQnKU=: 00:12:39.853 18:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:39.853 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:39.853 18:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:12:39.853 18:32:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.853 18:32:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.853 18:32:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.853 18:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:39.853 18:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:39.853 18:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:40.112 18:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:12:40.112 18:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:40.112 18:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:40.112 18:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:40.112 18:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:40.113 18:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:40.113 18:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:40.113 18:32:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.113 18:32:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.113 18:32:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.113 18:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:40.113 18:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:40.372 00:12:40.372 18:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:40.372 18:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:40.372 18:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:40.632 18:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:40.632 18:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:40.632 18:32:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.632 18:32:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.632 18:32:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.632 18:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:40.632 { 00:12:40.632 "auth": { 00:12:40.632 "dhgroup": "null", 00:12:40.632 "digest": "sha384", 00:12:40.632 "state": "completed" 00:12:40.632 }, 00:12:40.632 "cntlid": 51, 00:12:40.632 "listen_address": { 00:12:40.632 "adrfam": "IPv4", 00:12:40.632 "traddr": "10.0.0.2", 00:12:40.632 "trsvcid": "4420", 00:12:40.632 "trtype": "TCP" 00:12:40.632 }, 00:12:40.632 "peer_address": { 00:12:40.632 "adrfam": "IPv4", 00:12:40.632 "traddr": "10.0.0.1", 00:12:40.632 "trsvcid": "39044", 00:12:40.632 "trtype": "TCP" 00:12:40.632 }, 00:12:40.632 "qid": 0, 00:12:40.632 "state": "enabled", 00:12:40.632 "thread": "nvmf_tgt_poll_group_000" 00:12:40.632 } 00:12:40.632 ]' 00:12:40.632 18:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:40.632 18:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:40.632 18:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:40.632 18:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:40.632 18:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:40.632 18:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:40.632 18:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:40.632 18:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:40.890 18:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-secret DHHC-1:01:ZjlkNGIwMTQ3OTcyOWI3MWM0ODVkYjc4MjBjMzJiZjZeZGuM: --dhchap-ctrl-secret DHHC-1:02:ZGEyM2ZlOTg3NmNiMDk0ZDdhNGVhMjE3MDNkOTlkMmMxNGRhNWI1N2Y5NTViOTky87+8Nw==: 00:12:41.457 18:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:41.457 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:41.457 18:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:12:41.457 
18:32:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.457 18:32:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.457 18:32:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.457 18:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:41.457 18:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:41.457 18:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:41.717 18:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:12:41.717 18:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:41.717 18:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:41.717 18:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:41.717 18:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:41.717 18:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:41.717 18:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:41.717 18:32:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.717 18:32:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.717 18:32:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.717 18:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:41.717 18:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:41.979 00:12:41.979 18:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:41.979 18:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:41.979 18:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:42.239 18:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:42.239 18:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:42.239 18:32:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.239 18:32:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.239 18:32:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.239 18:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:42.239 { 00:12:42.239 
"auth": { 00:12:42.239 "dhgroup": "null", 00:12:42.239 "digest": "sha384", 00:12:42.239 "state": "completed" 00:12:42.239 }, 00:12:42.239 "cntlid": 53, 00:12:42.239 "listen_address": { 00:12:42.239 "adrfam": "IPv4", 00:12:42.239 "traddr": "10.0.0.2", 00:12:42.239 "trsvcid": "4420", 00:12:42.239 "trtype": "TCP" 00:12:42.239 }, 00:12:42.239 "peer_address": { 00:12:42.239 "adrfam": "IPv4", 00:12:42.239 "traddr": "10.0.0.1", 00:12:42.239 "trsvcid": "52454", 00:12:42.239 "trtype": "TCP" 00:12:42.239 }, 00:12:42.239 "qid": 0, 00:12:42.239 "state": "enabled", 00:12:42.239 "thread": "nvmf_tgt_poll_group_000" 00:12:42.239 } 00:12:42.239 ]' 00:12:42.239 18:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:42.239 18:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:42.239 18:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:42.239 18:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:42.239 18:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:42.239 18:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:42.240 18:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:42.240 18:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:42.499 18:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-secret DHHC-1:02:Njg1MTJjZGU5NjQzZTdkOWUwOTFkYzZkMDczZTgxMjhhMzIxNGFlNDQ3NTRmMzk3fr04jw==: --dhchap-ctrl-secret DHHC-1:01:NDE4MTg3NjkzZjFkMjE3OWIzZmY3ZTRhMjgwYTY4YzEWyQf7: 00:12:43.067 18:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:43.067 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:43.067 18:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:12:43.067 18:32:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.067 18:32:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.067 18:32:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.067 18:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:43.067 18:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:43.068 18:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:43.327 18:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:12:43.327 18:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:43.327 18:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:43.327 18:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:43.327 18:32:05 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:43.327 18:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:43.327 18:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-key key3 00:12:43.327 18:32:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.327 18:32:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.327 18:32:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.327 18:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:43.327 18:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:43.584 00:12:43.584 18:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:43.584 18:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:43.584 18:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:43.842 18:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:43.842 18:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:43.842 18:32:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.842 18:32:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.842 18:32:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.842 18:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:43.842 { 00:12:43.842 "auth": { 00:12:43.842 "dhgroup": "null", 00:12:43.842 "digest": "sha384", 00:12:43.842 "state": "completed" 00:12:43.842 }, 00:12:43.842 "cntlid": 55, 00:12:43.842 "listen_address": { 00:12:43.842 "adrfam": "IPv4", 00:12:43.842 "traddr": "10.0.0.2", 00:12:43.842 "trsvcid": "4420", 00:12:43.842 "trtype": "TCP" 00:12:43.842 }, 00:12:43.842 "peer_address": { 00:12:43.842 "adrfam": "IPv4", 00:12:43.842 "traddr": "10.0.0.1", 00:12:43.843 "trsvcid": "52484", 00:12:43.843 "trtype": "TCP" 00:12:43.843 }, 00:12:43.843 "qid": 0, 00:12:43.843 "state": "enabled", 00:12:43.843 "thread": "nvmf_tgt_poll_group_000" 00:12:43.843 } 00:12:43.843 ]' 00:12:43.843 18:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:43.843 18:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:43.843 18:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:43.843 18:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:43.843 18:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:43.843 18:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:12:43.843 18:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:43.843 18:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:44.100 18:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-secret DHHC-1:03:ZGUxN2MzNTZkY2QyNjA1Y2U4MTE2ODNjYWJjNGMzNTBlMjNhY2MwOWZiNjAxMjk2ZmQzMjk2YTRmMWZhMzNjNojQFFw=: 00:12:44.665 18:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:44.924 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:44.924 18:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:12:44.924 18:32:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.924 18:32:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.924 18:32:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.924 18:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:44.924 18:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:44.924 18:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:44.924 18:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:44.924 18:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:12:44.924 18:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:44.924 18:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:44.924 18:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:44.924 18:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:44.924 18:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:44.924 18:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:44.924 18:32:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.924 18:32:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.924 18:32:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.924 18:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:44.924 18:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:45.182 00:12:45.182 18:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:45.182 18:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:45.182 18:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:45.440 18:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:45.440 18:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:45.440 18:32:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:45.440 18:32:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.440 18:32:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:45.440 18:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:45.440 { 00:12:45.440 "auth": { 00:12:45.440 "dhgroup": "ffdhe2048", 00:12:45.440 "digest": "sha384", 00:12:45.440 "state": "completed" 00:12:45.440 }, 00:12:45.440 "cntlid": 57, 00:12:45.440 "listen_address": { 00:12:45.440 "adrfam": "IPv4", 00:12:45.440 "traddr": "10.0.0.2", 00:12:45.440 "trsvcid": "4420", 00:12:45.440 "trtype": "TCP" 00:12:45.440 }, 00:12:45.440 "peer_address": { 00:12:45.440 "adrfam": "IPv4", 00:12:45.440 "traddr": "10.0.0.1", 00:12:45.440 "trsvcid": "52516", 00:12:45.440 "trtype": "TCP" 00:12:45.440 }, 00:12:45.440 "qid": 0, 00:12:45.440 "state": "enabled", 00:12:45.440 "thread": "nvmf_tgt_poll_group_000" 00:12:45.440 } 00:12:45.440 ]' 00:12:45.440 18:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:45.702 18:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:45.702 18:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:45.702 18:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:45.702 18:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:45.702 18:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:45.702 18:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:45.702 18:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:45.968 18:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-secret DHHC-1:00:OTdlNjAyOWFlOWU5YjYwNjUyMjcxY2MzOTk1YTIzNTFkZjg3MGM1YmI1ZjJkZWVjWYQ0Hg==: --dhchap-ctrl-secret DHHC-1:03:YmY3YzY0MWI1NWI1NTNlM2VkNmY3MDliZDAzOTUxNWFjMDRlNjU4Y2YxYTE4NzZjYTlmYmM4NmYxNzNmN2ZkNngQnKU=: 00:12:46.533 18:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:46.533 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:46.533 18:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:12:46.533 18:32:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.533 18:32:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.533 18:32:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.533 18:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:46.533 18:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:46.533 18:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:46.533 18:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:12:46.533 18:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:46.533 18:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:46.533 18:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:46.533 18:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:46.533 18:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:46.533 18:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:46.533 18:32:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.533 18:32:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.533 18:32:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.533 18:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:46.533 18:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:46.791 00:12:47.049 18:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:47.049 18:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:47.049 18:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:47.049 18:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:47.049 18:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:47.049 18:32:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.049 18:32:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.049 18:32:09 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.049 18:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:47.049 { 00:12:47.049 "auth": { 00:12:47.049 "dhgroup": "ffdhe2048", 00:12:47.049 "digest": "sha384", 00:12:47.049 "state": "completed" 00:12:47.049 }, 00:12:47.049 "cntlid": 59, 00:12:47.049 "listen_address": { 00:12:47.049 "adrfam": "IPv4", 00:12:47.049 "traddr": "10.0.0.2", 00:12:47.049 "trsvcid": "4420", 00:12:47.049 "trtype": "TCP" 00:12:47.049 }, 00:12:47.049 "peer_address": { 00:12:47.049 "adrfam": "IPv4", 00:12:47.049 "traddr": "10.0.0.1", 00:12:47.049 "trsvcid": "52556", 00:12:47.049 "trtype": "TCP" 00:12:47.049 }, 00:12:47.049 "qid": 0, 00:12:47.049 "state": "enabled", 00:12:47.049 "thread": "nvmf_tgt_poll_group_000" 00:12:47.049 } 00:12:47.049 ]' 00:12:47.049 18:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:47.307 18:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:47.307 18:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:47.307 18:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:47.307 18:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:47.308 18:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:47.308 18:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:47.308 18:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:47.565 18:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-secret DHHC-1:01:ZjlkNGIwMTQ3OTcyOWI3MWM0ODVkYjc4MjBjMzJiZjZeZGuM: --dhchap-ctrl-secret DHHC-1:02:ZGEyM2ZlOTg3NmNiMDk0ZDdhNGVhMjE3MDNkOTlkMmMxNGRhNWI1N2Y5NTViOTky87+8Nw==: 00:12:48.131 18:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:48.131 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:48.131 18:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:12:48.131 18:32:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.131 18:32:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.131 18:32:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.131 18:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:48.131 18:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:48.131 18:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:48.388 18:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:12:48.388 18:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
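Each connect_authenticate iteration in this trace repeats the same host/target RPC sequence for a given digest, dhgroup and key index. The following is a condensed sketch of that sequence, reconstructed only from the commands visible in the trace above (the real target/auth.sh may structure it differently, and the target-side RPC socket is assumed to be the default one):

    #!/usr/bin/env bash
    # Sketch of one iteration, using the paths, NQNs and host UUID exactly as they
    # appear in this log. Commands issued via rpc_cmd in the trace go to the nvmf
    # target (default RPC socket assumed here); hostrpc goes to the host-side app
    # listening on /var/tmp/host.sock.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    hostsock=/var/tmp/host.sock
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6

    # Limit the host to a single digest/dhgroup pair (here sha384 + ffdhe2048).
    "$rpc" -s "$hostsock" bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048

    # Register the host on the subsystem with the DH-HMAC-CHAP key names (key2/ckey2 here).
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # Attach a controller from the host side; this is where authentication actually runs.
    "$rpc" -s "$hostsock" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # Inspect the negotiated auth parameters, then tear the connection down again.
    "$rpc" nvmf_subsystem_get_qpairs "$subnqn"
    "$rpc" -s "$hostsock" bdev_nvme_detach_controller nvme0

The trace then repeats the same pattern once more via nvme connect / nvme disconnect with the corresponding DHHC-1 secrets, before removing the host and moving to the next key index.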
00:12:48.388 18:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:48.388 18:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:48.388 18:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:48.388 18:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:48.388 18:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:48.388 18:32:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.388 18:32:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.388 18:32:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.388 18:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:48.388 18:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:48.644 00:12:48.644 18:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:48.644 18:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:48.644 18:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:48.901 18:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:48.901 18:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:48.901 18:32:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.901 18:32:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.901 18:32:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.901 18:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:48.901 { 00:12:48.901 "auth": { 00:12:48.901 "dhgroup": "ffdhe2048", 00:12:48.901 "digest": "sha384", 00:12:48.901 "state": "completed" 00:12:48.901 }, 00:12:48.901 "cntlid": 61, 00:12:48.901 "listen_address": { 00:12:48.901 "adrfam": "IPv4", 00:12:48.901 "traddr": "10.0.0.2", 00:12:48.901 "trsvcid": "4420", 00:12:48.901 "trtype": "TCP" 00:12:48.901 }, 00:12:48.901 "peer_address": { 00:12:48.901 "adrfam": "IPv4", 00:12:48.901 "traddr": "10.0.0.1", 00:12:48.901 "trsvcid": "52584", 00:12:48.901 "trtype": "TCP" 00:12:48.901 }, 00:12:48.901 "qid": 0, 00:12:48.901 "state": "enabled", 00:12:48.901 "thread": "nvmf_tgt_poll_group_000" 00:12:48.901 } 00:12:48.901 ]' 00:12:48.901 18:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:48.901 18:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:48.901 18:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:48.901 
18:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:48.901 18:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:48.901 18:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:48.901 18:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:48.901 18:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:49.158 18:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-secret DHHC-1:02:Njg1MTJjZGU5NjQzZTdkOWUwOTFkYzZkMDczZTgxMjhhMzIxNGFlNDQ3NTRmMzk3fr04jw==: --dhchap-ctrl-secret DHHC-1:01:NDE4MTg3NjkzZjFkMjE3OWIzZmY3ZTRhMjgwYTY4YzEWyQf7: 00:12:49.732 18:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:49.732 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:49.732 18:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:12:49.732 18:32:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.732 18:32:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.732 18:32:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.732 18:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:49.732 18:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:49.732 18:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:49.989 18:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:12:49.989 18:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:49.989 18:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:49.989 18:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:49.989 18:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:49.989 18:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:49.989 18:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-key key3 00:12:49.989 18:32:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.989 18:32:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.989 18:32:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.989 18:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:49.989 18:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:50.246 00:12:50.246 18:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:50.246 18:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:50.246 18:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:50.504 18:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:50.504 18:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:50.504 18:32:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.504 18:32:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.504 18:32:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.504 18:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:50.504 { 00:12:50.504 "auth": { 00:12:50.504 "dhgroup": "ffdhe2048", 00:12:50.504 "digest": "sha384", 00:12:50.504 "state": "completed" 00:12:50.504 }, 00:12:50.504 "cntlid": 63, 00:12:50.504 "listen_address": { 00:12:50.504 "adrfam": "IPv4", 00:12:50.504 "traddr": "10.0.0.2", 00:12:50.504 "trsvcid": "4420", 00:12:50.504 "trtype": "TCP" 00:12:50.504 }, 00:12:50.504 "peer_address": { 00:12:50.504 "adrfam": "IPv4", 00:12:50.504 "traddr": "10.0.0.1", 00:12:50.504 "trsvcid": "52602", 00:12:50.504 "trtype": "TCP" 00:12:50.504 }, 00:12:50.504 "qid": 0, 00:12:50.504 "state": "enabled", 00:12:50.504 "thread": "nvmf_tgt_poll_group_000" 00:12:50.504 } 00:12:50.504 ]' 00:12:50.504 18:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:50.504 18:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:50.504 18:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:50.504 18:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:50.504 18:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:50.504 18:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:50.504 18:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:50.504 18:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:50.763 18:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-secret DHHC-1:03:ZGUxN2MzNTZkY2QyNjA1Y2U4MTE2ODNjYWJjNGMzNTBlMjNhY2MwOWZiNjAxMjk2ZmQzMjk2YTRmMWZhMzNjNojQFFw=: 00:12:51.326 18:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:51.326 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:51.326 18:32:13 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:12:51.326 18:32:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.326 18:32:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.326 18:32:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.326 18:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:51.326 18:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:51.326 18:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:51.326 18:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:51.584 18:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:12:51.584 18:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:51.584 18:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:51.584 18:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:51.584 18:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:51.584 18:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:51.584 18:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:51.584 18:32:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.584 18:32:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.584 18:32:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.584 18:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:51.584 18:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:51.841 00:12:51.841 18:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:51.841 18:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:51.841 18:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:52.099 18:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:52.099 18:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:52.099 18:32:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:12:52.099 18:32:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.099 18:32:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.100 18:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:52.100 { 00:12:52.100 "auth": { 00:12:52.100 "dhgroup": "ffdhe3072", 00:12:52.100 "digest": "sha384", 00:12:52.100 "state": "completed" 00:12:52.100 }, 00:12:52.100 "cntlid": 65, 00:12:52.100 "listen_address": { 00:12:52.100 "adrfam": "IPv4", 00:12:52.100 "traddr": "10.0.0.2", 00:12:52.100 "trsvcid": "4420", 00:12:52.100 "trtype": "TCP" 00:12:52.100 }, 00:12:52.100 "peer_address": { 00:12:52.100 "adrfam": "IPv4", 00:12:52.100 "traddr": "10.0.0.1", 00:12:52.100 "trsvcid": "38248", 00:12:52.100 "trtype": "TCP" 00:12:52.100 }, 00:12:52.100 "qid": 0, 00:12:52.100 "state": "enabled", 00:12:52.100 "thread": "nvmf_tgt_poll_group_000" 00:12:52.100 } 00:12:52.100 ]' 00:12:52.100 18:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:52.100 18:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:52.100 18:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:52.100 18:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:52.100 18:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:52.357 18:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:52.358 18:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:52.358 18:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:52.615 18:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-secret DHHC-1:00:OTdlNjAyOWFlOWU5YjYwNjUyMjcxY2MzOTk1YTIzNTFkZjg3MGM1YmI1ZjJkZWVjWYQ0Hg==: --dhchap-ctrl-secret DHHC-1:03:YmY3YzY0MWI1NWI1NTNlM2VkNmY3MDliZDAzOTUxNWFjMDRlNjU4Y2YxYTE4NzZjYTlmYmM4NmYxNzNmN2ZkNngQnKU=: 00:12:53.197 18:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:53.197 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:53.197 18:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:12:53.197 18:32:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.197 18:32:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.197 18:32:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.197 18:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:53.197 18:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:53.197 18:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:53.480 18:32:15 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:12:53.480 18:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:53.480 18:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:53.480 18:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:53.480 18:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:53.480 18:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:53.480 18:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:53.480 18:32:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.480 18:32:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.480 18:32:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.480 18:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:53.480 18:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:53.739 00:12:53.739 18:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:53.739 18:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:53.739 18:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:53.998 18:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:53.998 18:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:53.998 18:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.998 18:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.998 18:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.998 18:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:53.998 { 00:12:53.998 "auth": { 00:12:53.998 "dhgroup": "ffdhe3072", 00:12:53.998 "digest": "sha384", 00:12:53.998 "state": "completed" 00:12:53.998 }, 00:12:53.998 "cntlid": 67, 00:12:53.998 "listen_address": { 00:12:53.998 "adrfam": "IPv4", 00:12:53.998 "traddr": "10.0.0.2", 00:12:53.998 "trsvcid": "4420", 00:12:53.998 "trtype": "TCP" 00:12:53.998 }, 00:12:53.998 "peer_address": { 00:12:53.998 "adrfam": "IPv4", 00:12:53.998 "traddr": "10.0.0.1", 00:12:53.998 "trsvcid": "38270", 00:12:53.998 "trtype": "TCP" 00:12:53.998 }, 00:12:53.998 "qid": 0, 00:12:53.998 "state": "enabled", 00:12:53.998 "thread": "nvmf_tgt_poll_group_000" 00:12:53.998 } 00:12:53.998 ]' 00:12:53.998 18:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:53.998 
18:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:53.998 18:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:53.998 18:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:53.998 18:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:53.998 18:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:53.998 18:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:53.998 18:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:54.258 18:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-secret DHHC-1:01:ZjlkNGIwMTQ3OTcyOWI3MWM0ODVkYjc4MjBjMzJiZjZeZGuM: --dhchap-ctrl-secret DHHC-1:02:ZGEyM2ZlOTg3NmNiMDk0ZDdhNGVhMjE3MDNkOTlkMmMxNGRhNWI1N2Y5NTViOTky87+8Nw==: 00:12:54.826 18:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:54.826 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:54.826 18:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:12:54.826 18:32:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.826 18:32:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.826 18:32:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.826 18:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:54.826 18:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:54.826 18:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:55.086 18:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:12:55.086 18:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:55.086 18:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:55.086 18:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:55.086 18:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:55.086 18:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:55.086 18:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:55.086 18:32:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.086 18:32:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.086 18:32:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
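After every attach, the trace validates the qpair reported by nvmf_subsystem_get_qpairs. Stripped of the xtrace noise, that check is three jq comparisons against the expected digest, dhgroup and auth state; a stand-alone version using the same filters seen above (the variable name qpairs is assumed, holding the JSON array printed in these blocks) would be:

    # qpairs: JSON array returned by `rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0`.
    # Expected values match the current iteration of the loop (sha384 / ffdhe3072).
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]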
00:12:55.086 18:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:55.086 18:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:55.345 00:12:55.345 18:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:55.345 18:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:55.345 18:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:55.605 18:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:55.605 18:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:55.605 18:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.605 18:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.605 18:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.605 18:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:55.605 { 00:12:55.605 "auth": { 00:12:55.605 "dhgroup": "ffdhe3072", 00:12:55.605 "digest": "sha384", 00:12:55.605 "state": "completed" 00:12:55.605 }, 00:12:55.605 "cntlid": 69, 00:12:55.605 "listen_address": { 00:12:55.605 "adrfam": "IPv4", 00:12:55.605 "traddr": "10.0.0.2", 00:12:55.605 "trsvcid": "4420", 00:12:55.605 "trtype": "TCP" 00:12:55.605 }, 00:12:55.605 "peer_address": { 00:12:55.605 "adrfam": "IPv4", 00:12:55.605 "traddr": "10.0.0.1", 00:12:55.605 "trsvcid": "38284", 00:12:55.605 "trtype": "TCP" 00:12:55.605 }, 00:12:55.605 "qid": 0, 00:12:55.605 "state": "enabled", 00:12:55.605 "thread": "nvmf_tgt_poll_group_000" 00:12:55.605 } 00:12:55.605 ]' 00:12:55.605 18:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:55.605 18:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:55.605 18:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:55.605 18:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:55.605 18:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:55.605 18:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:55.605 18:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:55.605 18:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:55.864 18:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-secret 
DHHC-1:02:Njg1MTJjZGU5NjQzZTdkOWUwOTFkYzZkMDczZTgxMjhhMzIxNGFlNDQ3NTRmMzk3fr04jw==: --dhchap-ctrl-secret DHHC-1:01:NDE4MTg3NjkzZjFkMjE3OWIzZmY3ZTRhMjgwYTY4YzEWyQf7: 00:12:56.432 18:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:56.432 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:56.432 18:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:12:56.432 18:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.432 18:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.432 18:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.432 18:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:56.432 18:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:56.432 18:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:56.741 18:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:12:56.741 18:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:56.741 18:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:56.741 18:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:56.741 18:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:56.741 18:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:56.742 18:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-key key3 00:12:56.742 18:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.742 18:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.742 18:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.742 18:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:56.742 18:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:57.000 00:12:57.000 18:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:57.000 18:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:57.000 18:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:57.259 18:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:57.259 18:32:19 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:57.259 18:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.259 18:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.259 18:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.259 18:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:57.259 { 00:12:57.259 "auth": { 00:12:57.259 "dhgroup": "ffdhe3072", 00:12:57.259 "digest": "sha384", 00:12:57.259 "state": "completed" 00:12:57.259 }, 00:12:57.259 "cntlid": 71, 00:12:57.259 "listen_address": { 00:12:57.259 "adrfam": "IPv4", 00:12:57.259 "traddr": "10.0.0.2", 00:12:57.259 "trsvcid": "4420", 00:12:57.259 "trtype": "TCP" 00:12:57.259 }, 00:12:57.259 "peer_address": { 00:12:57.259 "adrfam": "IPv4", 00:12:57.259 "traddr": "10.0.0.1", 00:12:57.259 "trsvcid": "38296", 00:12:57.259 "trtype": "TCP" 00:12:57.259 }, 00:12:57.259 "qid": 0, 00:12:57.259 "state": "enabled", 00:12:57.259 "thread": "nvmf_tgt_poll_group_000" 00:12:57.259 } 00:12:57.259 ]' 00:12:57.259 18:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:57.259 18:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:57.259 18:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:57.259 18:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:57.259 18:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:57.259 18:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:57.259 18:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:57.259 18:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:57.517 18:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-secret DHHC-1:03:ZGUxN2MzNTZkY2QyNjA1Y2U4MTE2ODNjYWJjNGMzNTBlMjNhY2MwOWZiNjAxMjk2ZmQzMjk2YTRmMWZhMzNjNojQFFw=: 00:12:58.083 18:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:58.083 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:58.083 18:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:12:58.083 18:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.083 18:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.083 18:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.083 18:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:58.083 18:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:58.083 18:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:58.083 18:32:20 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:58.342 18:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:12:58.342 18:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:58.342 18:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:58.342 18:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:58.342 18:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:58.342 18:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:58.342 18:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:58.342 18:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.342 18:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.342 18:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.342 18:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:58.342 18:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:58.601 00:12:58.601 18:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:58.601 18:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:58.601 18:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:58.861 18:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:58.861 18:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:58.861 18:32:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.861 18:32:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.861 18:32:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.861 18:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:58.861 { 00:12:58.861 "auth": { 00:12:58.861 "dhgroup": "ffdhe4096", 00:12:58.861 "digest": "sha384", 00:12:58.861 "state": "completed" 00:12:58.861 }, 00:12:58.861 "cntlid": 73, 00:12:58.861 "listen_address": { 00:12:58.861 "adrfam": "IPv4", 00:12:58.861 "traddr": "10.0.0.2", 00:12:58.861 "trsvcid": "4420", 00:12:58.861 "trtype": "TCP" 00:12:58.861 }, 00:12:58.861 "peer_address": { 00:12:58.861 "adrfam": "IPv4", 00:12:58.861 "traddr": "10.0.0.1", 00:12:58.861 "trsvcid": "38324", 00:12:58.861 "trtype": "TCP" 00:12:58.861 }, 00:12:58.861 "qid": 0, 00:12:58.861 "state": "enabled", 
00:12:58.861 "thread": "nvmf_tgt_poll_group_000" 00:12:58.861 } 00:12:58.861 ]' 00:12:58.861 18:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:58.861 18:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:58.861 18:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:58.861 18:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:58.861 18:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:58.861 18:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:58.861 18:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:58.861 18:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:59.120 18:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-secret DHHC-1:00:OTdlNjAyOWFlOWU5YjYwNjUyMjcxY2MzOTk1YTIzNTFkZjg3MGM1YmI1ZjJkZWVjWYQ0Hg==: --dhchap-ctrl-secret DHHC-1:03:YmY3YzY0MWI1NWI1NTNlM2VkNmY3MDliZDAzOTUxNWFjMDRlNjU4Y2YxYTE4NzZjYTlmYmM4NmYxNzNmN2ZkNngQnKU=: 00:12:59.688 18:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:59.688 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:59.688 18:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:12:59.688 18:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.688 18:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.688 18:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.688 18:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:59.688 18:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:59.688 18:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:59.947 18:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:12:59.947 18:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:59.947 18:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:59.947 18:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:59.947 18:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:59.947 18:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:59.947 18:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:59.947 18:32:22 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.947 18:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.947 18:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.947 18:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:59.947 18:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:00.207 00:13:00.207 18:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:00.207 18:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:00.207 18:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:00.467 18:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:00.467 18:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:00.467 18:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.467 18:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.467 18:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.467 18:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:00.467 { 00:13:00.467 "auth": { 00:13:00.467 "dhgroup": "ffdhe4096", 00:13:00.467 "digest": "sha384", 00:13:00.467 "state": "completed" 00:13:00.467 }, 00:13:00.467 "cntlid": 75, 00:13:00.467 "listen_address": { 00:13:00.467 "adrfam": "IPv4", 00:13:00.467 "traddr": "10.0.0.2", 00:13:00.467 "trsvcid": "4420", 00:13:00.467 "trtype": "TCP" 00:13:00.467 }, 00:13:00.467 "peer_address": { 00:13:00.467 "adrfam": "IPv4", 00:13:00.467 "traddr": "10.0.0.1", 00:13:00.467 "trsvcid": "38360", 00:13:00.467 "trtype": "TCP" 00:13:00.467 }, 00:13:00.467 "qid": 0, 00:13:00.467 "state": "enabled", 00:13:00.467 "thread": "nvmf_tgt_poll_group_000" 00:13:00.467 } 00:13:00.467 ]' 00:13:00.467 18:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:00.467 18:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:00.467 18:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:00.467 18:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:00.467 18:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:00.467 18:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:00.467 18:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:00.467 18:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:00.727 18:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-secret DHHC-1:01:ZjlkNGIwMTQ3OTcyOWI3MWM0ODVkYjc4MjBjMzJiZjZeZGuM: --dhchap-ctrl-secret DHHC-1:02:ZGEyM2ZlOTg3NmNiMDk0ZDdhNGVhMjE3MDNkOTlkMmMxNGRhNWI1N2Y5NTViOTky87+8Nw==: 00:13:01.295 18:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:01.295 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:01.295 18:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:13:01.295 18:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.295 18:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.295 18:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.295 18:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:01.295 18:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:01.295 18:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:01.554 18:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:13:01.554 18:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:01.554 18:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:01.554 18:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:01.554 18:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:01.554 18:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:01.554 18:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:01.554 18:32:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.554 18:32:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.554 18:32:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.554 18:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:01.554 18:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:01.813 00:13:01.813 18:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:01.813 18:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:01.813 18:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:02.072 18:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:02.072 18:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:02.072 18:32:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.072 18:32:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.072 18:32:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.072 18:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:02.072 { 00:13:02.072 "auth": { 00:13:02.072 "dhgroup": "ffdhe4096", 00:13:02.072 "digest": "sha384", 00:13:02.072 "state": "completed" 00:13:02.072 }, 00:13:02.072 "cntlid": 77, 00:13:02.072 "listen_address": { 00:13:02.072 "adrfam": "IPv4", 00:13:02.072 "traddr": "10.0.0.2", 00:13:02.072 "trsvcid": "4420", 00:13:02.072 "trtype": "TCP" 00:13:02.072 }, 00:13:02.072 "peer_address": { 00:13:02.072 "adrfam": "IPv4", 00:13:02.072 "traddr": "10.0.0.1", 00:13:02.072 "trsvcid": "45134", 00:13:02.072 "trtype": "TCP" 00:13:02.072 }, 00:13:02.072 "qid": 0, 00:13:02.072 "state": "enabled", 00:13:02.072 "thread": "nvmf_tgt_poll_group_000" 00:13:02.072 } 00:13:02.072 ]' 00:13:02.072 18:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:02.072 18:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:02.072 18:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:02.072 18:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:02.072 18:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:02.330 18:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:02.330 18:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:02.330 18:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:02.330 18:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-secret DHHC-1:02:Njg1MTJjZGU5NjQzZTdkOWUwOTFkYzZkMDczZTgxMjhhMzIxNGFlNDQ3NTRmMzk3fr04jw==: --dhchap-ctrl-secret DHHC-1:01:NDE4MTg3NjkzZjFkMjE3OWIzZmY3ZTRhMjgwYTY4YzEWyQf7: 00:13:02.898 18:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:02.898 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:02.898 18:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:13:02.898 18:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.898 18:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.898 18:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.898 18:32:25 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:02.898 18:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:02.898 18:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:03.157 18:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:13:03.157 18:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:03.157 18:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:03.157 18:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:03.157 18:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:03.157 18:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:03.157 18:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-key key3 00:13:03.157 18:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.157 18:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.157 18:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.157 18:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:03.157 18:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:03.416 00:13:03.416 18:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:03.416 18:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:03.416 18:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:03.675 18:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:03.675 18:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:03.675 18:32:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.675 18:32:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.675 18:32:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.675 18:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:03.675 { 00:13:03.675 "auth": { 00:13:03.675 "dhgroup": "ffdhe4096", 00:13:03.675 "digest": "sha384", 00:13:03.675 "state": "completed" 00:13:03.675 }, 00:13:03.675 "cntlid": 79, 00:13:03.675 "listen_address": { 00:13:03.675 "adrfam": "IPv4", 00:13:03.675 "traddr": "10.0.0.2", 00:13:03.675 "trsvcid": "4420", 00:13:03.675 "trtype": "TCP" 00:13:03.675 }, 00:13:03.675 
"peer_address": { 00:13:03.675 "adrfam": "IPv4", 00:13:03.675 "traddr": "10.0.0.1", 00:13:03.675 "trsvcid": "45162", 00:13:03.675 "trtype": "TCP" 00:13:03.675 }, 00:13:03.675 "qid": 0, 00:13:03.675 "state": "enabled", 00:13:03.675 "thread": "nvmf_tgt_poll_group_000" 00:13:03.675 } 00:13:03.675 ]' 00:13:03.675 18:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:03.675 18:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:03.934 18:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:03.934 18:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:03.934 18:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:03.934 18:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:03.934 18:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:03.934 18:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:04.192 18:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-secret DHHC-1:03:ZGUxN2MzNTZkY2QyNjA1Y2U4MTE2ODNjYWJjNGMzNTBlMjNhY2MwOWZiNjAxMjk2ZmQzMjk2YTRmMWZhMzNjNojQFFw=: 00:13:04.758 18:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:04.758 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:04.758 18:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:13:04.758 18:32:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.758 18:32:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.758 18:32:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.758 18:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:04.758 18:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:04.758 18:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:04.758 18:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:05.037 18:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:13:05.037 18:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:05.037 18:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:05.037 18:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:05.037 18:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:05.037 18:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:05.037 18:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:05.037 18:32:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:05.037 18:32:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.037 18:32:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:05.037 18:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:05.037 18:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:05.295 00:13:05.295 18:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:05.295 18:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:05.295 18:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:05.554 18:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:05.554 18:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:05.554 18:32:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:05.554 18:32:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.554 18:32:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:05.554 18:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:05.554 { 00:13:05.554 "auth": { 00:13:05.554 "dhgroup": "ffdhe6144", 00:13:05.554 "digest": "sha384", 00:13:05.554 "state": "completed" 00:13:05.554 }, 00:13:05.554 "cntlid": 81, 00:13:05.554 "listen_address": { 00:13:05.554 "adrfam": "IPv4", 00:13:05.554 "traddr": "10.0.0.2", 00:13:05.554 "trsvcid": "4420", 00:13:05.554 "trtype": "TCP" 00:13:05.554 }, 00:13:05.554 "peer_address": { 00:13:05.554 "adrfam": "IPv4", 00:13:05.554 "traddr": "10.0.0.1", 00:13:05.554 "trsvcid": "45192", 00:13:05.554 "trtype": "TCP" 00:13:05.554 }, 00:13:05.554 "qid": 0, 00:13:05.554 "state": "enabled", 00:13:05.554 "thread": "nvmf_tgt_poll_group_000" 00:13:05.554 } 00:13:05.554 ]' 00:13:05.554 18:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:05.554 18:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:05.554 18:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:05.554 18:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:05.554 18:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:05.554 18:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:05.554 18:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:05.554 18:32:28 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:05.813 18:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-secret DHHC-1:00:OTdlNjAyOWFlOWU5YjYwNjUyMjcxY2MzOTk1YTIzNTFkZjg3MGM1YmI1ZjJkZWVjWYQ0Hg==: --dhchap-ctrl-secret DHHC-1:03:YmY3YzY0MWI1NWI1NTNlM2VkNmY3MDliZDAzOTUxNWFjMDRlNjU4Y2YxYTE4NzZjYTlmYmM4NmYxNzNmN2ZkNngQnKU=: 00:13:06.381 18:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:06.381 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:06.381 18:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:13:06.381 18:32:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.381 18:32:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.381 18:32:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.381 18:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:06.381 18:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:06.381 18:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:06.640 18:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:13:06.640 18:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:06.640 18:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:06.640 18:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:06.640 18:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:06.640 18:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:06.640 18:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:06.640 18:32:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.640 18:32:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.640 18:32:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.640 18:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:06.640 18:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:06.899 00:13:07.158 18:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:07.158 18:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:07.158 18:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:07.158 18:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:07.158 18:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:07.158 18:32:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.158 18:32:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.158 18:32:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.158 18:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:07.158 { 00:13:07.158 "auth": { 00:13:07.158 "dhgroup": "ffdhe6144", 00:13:07.158 "digest": "sha384", 00:13:07.158 "state": "completed" 00:13:07.158 }, 00:13:07.158 "cntlid": 83, 00:13:07.158 "listen_address": { 00:13:07.158 "adrfam": "IPv4", 00:13:07.158 "traddr": "10.0.0.2", 00:13:07.158 "trsvcid": "4420", 00:13:07.158 "trtype": "TCP" 00:13:07.158 }, 00:13:07.158 "peer_address": { 00:13:07.158 "adrfam": "IPv4", 00:13:07.158 "traddr": "10.0.0.1", 00:13:07.158 "trsvcid": "45208", 00:13:07.158 "trtype": "TCP" 00:13:07.158 }, 00:13:07.158 "qid": 0, 00:13:07.158 "state": "enabled", 00:13:07.158 "thread": "nvmf_tgt_poll_group_000" 00:13:07.158 } 00:13:07.158 ]' 00:13:07.158 18:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:07.417 18:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:07.417 18:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:07.417 18:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:07.417 18:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:07.417 18:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:07.417 18:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:07.417 18:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:07.676 18:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-secret DHHC-1:01:ZjlkNGIwMTQ3OTcyOWI3MWM0ODVkYjc4MjBjMzJiZjZeZGuM: --dhchap-ctrl-secret DHHC-1:02:ZGEyM2ZlOTg3NmNiMDk0ZDdhNGVhMjE3MDNkOTlkMmMxNGRhNWI1N2Y5NTViOTky87+8Nw==: 00:13:08.243 18:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:08.243 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:08.243 18:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:13:08.243 18:32:30 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.243 18:32:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.243 18:32:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.243 18:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:08.243 18:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:08.243 18:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:08.243 18:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:13:08.243 18:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:08.243 18:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:08.243 18:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:08.243 18:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:08.243 18:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:08.243 18:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:08.243 18:32:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.243 18:32:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.243 18:32:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.243 18:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:08.243 18:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:08.812 00:13:08.812 18:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:08.812 18:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:08.812 18:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:09.070 18:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:09.070 18:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:09.070 18:32:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.070 18:32:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.070 18:32:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.070 18:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:09.070 { 00:13:09.070 "auth": { 
00:13:09.070 "dhgroup": "ffdhe6144", 00:13:09.070 "digest": "sha384", 00:13:09.070 "state": "completed" 00:13:09.070 }, 00:13:09.070 "cntlid": 85, 00:13:09.070 "listen_address": { 00:13:09.070 "adrfam": "IPv4", 00:13:09.070 "traddr": "10.0.0.2", 00:13:09.071 "trsvcid": "4420", 00:13:09.071 "trtype": "TCP" 00:13:09.071 }, 00:13:09.071 "peer_address": { 00:13:09.071 "adrfam": "IPv4", 00:13:09.071 "traddr": "10.0.0.1", 00:13:09.071 "trsvcid": "45234", 00:13:09.071 "trtype": "TCP" 00:13:09.071 }, 00:13:09.071 "qid": 0, 00:13:09.071 "state": "enabled", 00:13:09.071 "thread": "nvmf_tgt_poll_group_000" 00:13:09.071 } 00:13:09.071 ]' 00:13:09.071 18:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:09.071 18:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:09.071 18:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:09.071 18:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:09.071 18:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:09.071 18:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:09.071 18:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:09.071 18:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:09.329 18:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-secret DHHC-1:02:Njg1MTJjZGU5NjQzZTdkOWUwOTFkYzZkMDczZTgxMjhhMzIxNGFlNDQ3NTRmMzk3fr04jw==: --dhchap-ctrl-secret DHHC-1:01:NDE4MTg3NjkzZjFkMjE3OWIzZmY3ZTRhMjgwYTY4YzEWyQf7: 00:13:09.895 18:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:09.895 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:09.895 18:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:13:09.895 18:32:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.895 18:32:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.895 18:32:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.895 18:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:09.895 18:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:09.895 18:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:10.153 18:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:13:10.153 18:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:10.153 18:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:10.153 18:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 
00:13:10.153 18:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:10.153 18:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:10.153 18:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-key key3 00:13:10.153 18:32:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.153 18:32:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.153 18:32:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.153 18:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:10.153 18:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:10.410 00:13:10.410 18:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:10.410 18:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:10.410 18:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:10.668 18:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:10.668 18:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:10.668 18:32:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.668 18:32:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.668 18:32:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.668 18:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:10.668 { 00:13:10.668 "auth": { 00:13:10.668 "dhgroup": "ffdhe6144", 00:13:10.668 "digest": "sha384", 00:13:10.668 "state": "completed" 00:13:10.668 }, 00:13:10.668 "cntlid": 87, 00:13:10.668 "listen_address": { 00:13:10.668 "adrfam": "IPv4", 00:13:10.668 "traddr": "10.0.0.2", 00:13:10.668 "trsvcid": "4420", 00:13:10.668 "trtype": "TCP" 00:13:10.668 }, 00:13:10.668 "peer_address": { 00:13:10.668 "adrfam": "IPv4", 00:13:10.668 "traddr": "10.0.0.1", 00:13:10.668 "trsvcid": "45272", 00:13:10.668 "trtype": "TCP" 00:13:10.668 }, 00:13:10.668 "qid": 0, 00:13:10.668 "state": "enabled", 00:13:10.668 "thread": "nvmf_tgt_poll_group_000" 00:13:10.668 } 00:13:10.668 ]' 00:13:10.668 18:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:10.668 18:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:10.668 18:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:10.927 18:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:10.927 18:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:10.927 18:32:33 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:10.927 18:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:10.927 18:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:11.188 18:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-secret DHHC-1:03:ZGUxN2MzNTZkY2QyNjA1Y2U4MTE2ODNjYWJjNGMzNTBlMjNhY2MwOWZiNjAxMjk2ZmQzMjk2YTRmMWZhMzNjNojQFFw=: 00:13:11.753 18:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:11.753 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:11.753 18:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:13:11.753 18:32:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.753 18:32:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.753 18:32:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.753 18:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:11.753 18:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:11.753 18:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:11.753 18:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:11.753 18:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:13:11.753 18:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:11.753 18:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:11.753 18:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:11.753 18:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:11.753 18:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:11.754 18:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:11.754 18:32:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.754 18:32:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.754 18:32:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.754 18:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:11.754 18:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:12.317 00:13:12.317 18:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:12.317 18:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:12.317 18:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:12.575 18:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:12.575 18:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:12.575 18:32:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.575 18:32:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.575 18:32:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.575 18:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:12.575 { 00:13:12.575 "auth": { 00:13:12.575 "dhgroup": "ffdhe8192", 00:13:12.575 "digest": "sha384", 00:13:12.575 "state": "completed" 00:13:12.575 }, 00:13:12.575 "cntlid": 89, 00:13:12.575 "listen_address": { 00:13:12.575 "adrfam": "IPv4", 00:13:12.575 "traddr": "10.0.0.2", 00:13:12.575 "trsvcid": "4420", 00:13:12.575 "trtype": "TCP" 00:13:12.575 }, 00:13:12.575 "peer_address": { 00:13:12.575 "adrfam": "IPv4", 00:13:12.575 "traddr": "10.0.0.1", 00:13:12.575 "trsvcid": "45860", 00:13:12.575 "trtype": "TCP" 00:13:12.575 }, 00:13:12.575 "qid": 0, 00:13:12.575 "state": "enabled", 00:13:12.575 "thread": "nvmf_tgt_poll_group_000" 00:13:12.575 } 00:13:12.575 ]' 00:13:12.575 18:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:12.575 18:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:12.575 18:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:12.575 18:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:12.575 18:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:12.832 18:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:12.832 18:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:12.832 18:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:12.832 18:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-secret DHHC-1:00:OTdlNjAyOWFlOWU5YjYwNjUyMjcxY2MzOTk1YTIzNTFkZjg3MGM1YmI1ZjJkZWVjWYQ0Hg==: --dhchap-ctrl-secret DHHC-1:03:YmY3YzY0MWI1NWI1NTNlM2VkNmY3MDliZDAzOTUxNWFjMDRlNjU4Y2YxYTE4NzZjYTlmYmM4NmYxNzNmN2ZkNngQnKU=: 00:13:13.396 18:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:13.396 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:13.396 
18:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:13:13.396 18:32:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.396 18:32:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.396 18:32:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.396 18:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:13.396 18:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:13.396 18:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:13.653 18:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:13:13.653 18:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:13.653 18:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:13.653 18:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:13.653 18:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:13.653 18:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:13.653 18:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:13.653 18:32:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.653 18:32:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.653 18:32:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.653 18:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:13.653 18:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:14.219 00:13:14.219 18:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:14.219 18:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:14.219 18:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:14.476 18:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:14.476 18:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:14.476 18:32:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.476 18:32:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:13:14.476 18:32:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.476 18:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:14.476 { 00:13:14.476 "auth": { 00:13:14.476 "dhgroup": "ffdhe8192", 00:13:14.476 "digest": "sha384", 00:13:14.476 "state": "completed" 00:13:14.476 }, 00:13:14.476 "cntlid": 91, 00:13:14.476 "listen_address": { 00:13:14.476 "adrfam": "IPv4", 00:13:14.476 "traddr": "10.0.0.2", 00:13:14.476 "trsvcid": "4420", 00:13:14.476 "trtype": "TCP" 00:13:14.476 }, 00:13:14.476 "peer_address": { 00:13:14.476 "adrfam": "IPv4", 00:13:14.476 "traddr": "10.0.0.1", 00:13:14.476 "trsvcid": "45894", 00:13:14.476 "trtype": "TCP" 00:13:14.476 }, 00:13:14.476 "qid": 0, 00:13:14.476 "state": "enabled", 00:13:14.476 "thread": "nvmf_tgt_poll_group_000" 00:13:14.476 } 00:13:14.476 ]' 00:13:14.476 18:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:14.476 18:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:14.476 18:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:14.476 18:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:14.476 18:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:14.734 18:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:14.734 18:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:14.734 18:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:14.734 18:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-secret DHHC-1:01:ZjlkNGIwMTQ3OTcyOWI3MWM0ODVkYjc4MjBjMzJiZjZeZGuM: --dhchap-ctrl-secret DHHC-1:02:ZGEyM2ZlOTg3NmNiMDk0ZDdhNGVhMjE3MDNkOTlkMmMxNGRhNWI1N2Y5NTViOTky87+8Nw==: 00:13:15.300 18:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:15.558 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:15.558 18:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:13:15.558 18:32:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.558 18:32:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.558 18:32:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.558 18:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:15.558 18:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:15.558 18:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:15.558 18:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:13:15.558 18:32:38 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:15.558 18:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:15.558 18:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:15.558 18:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:15.558 18:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:15.558 18:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:15.558 18:32:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.558 18:32:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.558 18:32:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.558 18:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:15.558 18:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:16.125 00:13:16.125 18:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:16.125 18:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:16.125 18:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:16.385 18:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:16.385 18:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:16.385 18:32:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.385 18:32:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.385 18:32:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.385 18:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:16.385 { 00:13:16.385 "auth": { 00:13:16.385 "dhgroup": "ffdhe8192", 00:13:16.385 "digest": "sha384", 00:13:16.385 "state": "completed" 00:13:16.385 }, 00:13:16.385 "cntlid": 93, 00:13:16.385 "listen_address": { 00:13:16.385 "adrfam": "IPv4", 00:13:16.385 "traddr": "10.0.0.2", 00:13:16.385 "trsvcid": "4420", 00:13:16.385 "trtype": "TCP" 00:13:16.385 }, 00:13:16.385 "peer_address": { 00:13:16.385 "adrfam": "IPv4", 00:13:16.385 "traddr": "10.0.0.1", 00:13:16.385 "trsvcid": "45934", 00:13:16.385 "trtype": "TCP" 00:13:16.385 }, 00:13:16.385 "qid": 0, 00:13:16.385 "state": "enabled", 00:13:16.385 "thread": "nvmf_tgt_poll_group_000" 00:13:16.385 } 00:13:16.385 ]' 00:13:16.385 18:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:16.385 18:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:16.385 18:32:38 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:16.385 18:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:16.385 18:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:16.643 18:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:16.643 18:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:16.643 18:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:16.643 18:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-secret DHHC-1:02:Njg1MTJjZGU5NjQzZTdkOWUwOTFkYzZkMDczZTgxMjhhMzIxNGFlNDQ3NTRmMzk3fr04jw==: --dhchap-ctrl-secret DHHC-1:01:NDE4MTg3NjkzZjFkMjE3OWIzZmY3ZTRhMjgwYTY4YzEWyQf7: 00:13:17.210 18:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:17.469 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:17.469 18:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:13:17.469 18:32:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.469 18:32:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.469 18:32:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.469 18:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:17.469 18:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:17.469 18:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:17.469 18:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:13:17.469 18:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:17.469 18:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:17.469 18:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:17.469 18:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:17.469 18:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:17.469 18:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-key key3 00:13:17.469 18:32:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.469 18:32:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.469 18:32:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.469 18:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:17.469 18:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:18.046 00:13:18.046 18:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:18.046 18:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:18.046 18:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:18.304 18:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:18.304 18:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:18.304 18:32:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:18.304 18:32:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.304 18:32:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:18.304 18:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:18.304 { 00:13:18.304 "auth": { 00:13:18.304 "dhgroup": "ffdhe8192", 00:13:18.304 "digest": "sha384", 00:13:18.304 "state": "completed" 00:13:18.304 }, 00:13:18.304 "cntlid": 95, 00:13:18.304 "listen_address": { 00:13:18.304 "adrfam": "IPv4", 00:13:18.304 "traddr": "10.0.0.2", 00:13:18.304 "trsvcid": "4420", 00:13:18.304 "trtype": "TCP" 00:13:18.304 }, 00:13:18.304 "peer_address": { 00:13:18.304 "adrfam": "IPv4", 00:13:18.304 "traddr": "10.0.0.1", 00:13:18.304 "trsvcid": "45966", 00:13:18.304 "trtype": "TCP" 00:13:18.304 }, 00:13:18.304 "qid": 0, 00:13:18.304 "state": "enabled", 00:13:18.304 "thread": "nvmf_tgt_poll_group_000" 00:13:18.304 } 00:13:18.304 ]' 00:13:18.304 18:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:18.304 18:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:18.304 18:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:18.561 18:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:18.561 18:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:18.561 18:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:18.561 18:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:18.561 18:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:18.561 18:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-secret DHHC-1:03:ZGUxN2MzNTZkY2QyNjA1Y2U4MTE2ODNjYWJjNGMzNTBlMjNhY2MwOWZiNjAxMjk2ZmQzMjk2YTRmMWZhMzNjNojQFFw=: 00:13:19.128 18:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:19.128 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:19.128 18:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:13:19.128 18:32:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.128 18:32:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.128 18:32:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.128 18:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:13:19.128 18:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:19.128 18:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:19.128 18:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:19.128 18:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:19.697 18:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:13:19.697 18:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:19.697 18:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:19.697 18:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:19.697 18:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:19.697 18:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:19.697 18:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:19.697 18:32:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.697 18:32:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.697 18:32:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.697 18:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:19.697 18:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:19.697 00:13:19.697 18:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:19.697 18:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:19.956 18:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:19.956 18:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:19.956 18:32:42 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:19.956 18:32:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.956 18:32:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.956 18:32:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.956 18:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:19.956 { 00:13:19.956 "auth": { 00:13:19.956 "dhgroup": "null", 00:13:19.956 "digest": "sha512", 00:13:19.956 "state": "completed" 00:13:19.956 }, 00:13:19.956 "cntlid": 97, 00:13:19.956 "listen_address": { 00:13:19.956 "adrfam": "IPv4", 00:13:19.956 "traddr": "10.0.0.2", 00:13:19.956 "trsvcid": "4420", 00:13:19.956 "trtype": "TCP" 00:13:19.956 }, 00:13:19.956 "peer_address": { 00:13:19.956 "adrfam": "IPv4", 00:13:19.956 "traddr": "10.0.0.1", 00:13:19.956 "trsvcid": "45990", 00:13:19.956 "trtype": "TCP" 00:13:19.956 }, 00:13:19.956 "qid": 0, 00:13:19.956 "state": "enabled", 00:13:19.956 "thread": "nvmf_tgt_poll_group_000" 00:13:19.956 } 00:13:19.956 ]' 00:13:19.956 18:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:19.956 18:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:20.214 18:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:20.214 18:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:20.214 18:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:20.214 18:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:20.214 18:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:20.214 18:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:20.473 18:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-secret DHHC-1:00:OTdlNjAyOWFlOWU5YjYwNjUyMjcxY2MzOTk1YTIzNTFkZjg3MGM1YmI1ZjJkZWVjWYQ0Hg==: --dhchap-ctrl-secret DHHC-1:03:YmY3YzY0MWI1NWI1NTNlM2VkNmY3MDliZDAzOTUxNWFjMDRlNjU4Y2YxYTE4NzZjYTlmYmM4NmYxNzNmN2ZkNngQnKU=: 00:13:21.066 18:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:21.066 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:21.066 18:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:13:21.066 18:32:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.066 18:32:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.066 18:32:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.066 18:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:21.066 18:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:21.066 18:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:21.066 18:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:13:21.066 18:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:21.066 18:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:21.066 18:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:21.066 18:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:21.066 18:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:21.066 18:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:21.066 18:32:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.066 18:32:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.066 18:32:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.066 18:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:21.066 18:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:21.326 00:13:21.326 18:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:21.326 18:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:21.326 18:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:21.586 18:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:21.586 18:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:21.586 18:32:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.586 18:32:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.586 18:32:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.586 18:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:21.586 { 00:13:21.586 "auth": { 00:13:21.586 "dhgroup": "null", 00:13:21.586 "digest": "sha512", 00:13:21.586 "state": "completed" 00:13:21.586 }, 00:13:21.586 "cntlid": 99, 00:13:21.586 "listen_address": { 00:13:21.586 "adrfam": "IPv4", 00:13:21.586 "traddr": "10.0.0.2", 00:13:21.586 "trsvcid": "4420", 00:13:21.586 "trtype": "TCP" 00:13:21.586 }, 00:13:21.586 "peer_address": { 00:13:21.586 "adrfam": "IPv4", 00:13:21.586 "traddr": "10.0.0.1", 00:13:21.586 "trsvcid": "46022", 00:13:21.586 "trtype": "TCP" 00:13:21.586 }, 00:13:21.586 "qid": 0, 00:13:21.586 "state": "enabled", 00:13:21.586 "thread": "nvmf_tgt_poll_group_000" 
00:13:21.586 } 00:13:21.586 ]' 00:13:21.586 18:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:21.586 18:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:21.586 18:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:21.845 18:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:21.845 18:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:21.845 18:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:21.845 18:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:21.845 18:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:22.103 18:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-secret DHHC-1:01:ZjlkNGIwMTQ3OTcyOWI3MWM0ODVkYjc4MjBjMzJiZjZeZGuM: --dhchap-ctrl-secret DHHC-1:02:ZGEyM2ZlOTg3NmNiMDk0ZDdhNGVhMjE3MDNkOTlkMmMxNGRhNWI1N2Y5NTViOTky87+8Nw==: 00:13:22.671 18:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:22.671 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:22.671 18:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:13:22.671 18:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:22.671 18:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.671 18:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:22.671 18:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:22.671 18:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:22.671 18:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:22.671 18:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:13:22.671 18:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:22.671 18:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:22.671 18:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:22.671 18:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:22.671 18:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:22.671 18:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:22.671 18:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:22.671 18:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:13:22.671 18:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:22.671 18:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:22.671 18:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:22.929 00:13:22.929 18:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:22.929 18:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:22.929 18:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:23.187 18:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:23.187 18:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:23.187 18:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.187 18:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.187 18:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.187 18:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:23.187 { 00:13:23.187 "auth": { 00:13:23.187 "dhgroup": "null", 00:13:23.187 "digest": "sha512", 00:13:23.187 "state": "completed" 00:13:23.187 }, 00:13:23.187 "cntlid": 101, 00:13:23.187 "listen_address": { 00:13:23.187 "adrfam": "IPv4", 00:13:23.187 "traddr": "10.0.0.2", 00:13:23.187 "trsvcid": "4420", 00:13:23.187 "trtype": "TCP" 00:13:23.187 }, 00:13:23.187 "peer_address": { 00:13:23.187 "adrfam": "IPv4", 00:13:23.187 "traddr": "10.0.0.1", 00:13:23.187 "trsvcid": "52354", 00:13:23.187 "trtype": "TCP" 00:13:23.187 }, 00:13:23.187 "qid": 0, 00:13:23.187 "state": "enabled", 00:13:23.187 "thread": "nvmf_tgt_poll_group_000" 00:13:23.187 } 00:13:23.187 ]' 00:13:23.187 18:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:23.187 18:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:23.187 18:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:23.187 18:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:23.187 18:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:23.445 18:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:23.445 18:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:23.445 18:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:23.445 18:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid 
ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-secret DHHC-1:02:Njg1MTJjZGU5NjQzZTdkOWUwOTFkYzZkMDczZTgxMjhhMzIxNGFlNDQ3NTRmMzk3fr04jw==: --dhchap-ctrl-secret DHHC-1:01:NDE4MTg3NjkzZjFkMjE3OWIzZmY3ZTRhMjgwYTY4YzEWyQf7: 00:13:24.445 18:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:24.445 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:24.445 18:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:13:24.445 18:32:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:24.445 18:32:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.445 18:32:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:24.445 18:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:24.445 18:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:24.445 18:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:24.445 18:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:13:24.445 18:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:24.445 18:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:24.445 18:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:24.445 18:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:24.445 18:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:24.445 18:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-key key3 00:13:24.445 18:32:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:24.445 18:32:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.445 18:32:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:24.445 18:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:24.445 18:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:24.704 00:13:24.704 18:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:24.704 18:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:24.704 18:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:24.964 18:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:13:24.964 18:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:24.964 18:32:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:24.964 18:32:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.964 18:32:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:24.964 18:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:24.964 { 00:13:24.964 "auth": { 00:13:24.964 "dhgroup": "null", 00:13:24.964 "digest": "sha512", 00:13:24.964 "state": "completed" 00:13:24.964 }, 00:13:24.964 "cntlid": 103, 00:13:24.964 "listen_address": { 00:13:24.964 "adrfam": "IPv4", 00:13:24.964 "traddr": "10.0.0.2", 00:13:24.964 "trsvcid": "4420", 00:13:24.964 "trtype": "TCP" 00:13:24.964 }, 00:13:24.964 "peer_address": { 00:13:24.964 "adrfam": "IPv4", 00:13:24.964 "traddr": "10.0.0.1", 00:13:24.964 "trsvcid": "52370", 00:13:24.964 "trtype": "TCP" 00:13:24.964 }, 00:13:24.964 "qid": 0, 00:13:24.964 "state": "enabled", 00:13:24.964 "thread": "nvmf_tgt_poll_group_000" 00:13:24.964 } 00:13:24.964 ]' 00:13:24.964 18:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:24.964 18:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:24.964 18:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:24.964 18:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:24.964 18:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:24.964 18:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:24.964 18:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:24.964 18:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:25.222 18:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-secret DHHC-1:03:ZGUxN2MzNTZkY2QyNjA1Y2U4MTE2ODNjYWJjNGMzNTBlMjNhY2MwOWZiNjAxMjk2ZmQzMjk2YTRmMWZhMzNjNojQFFw=: 00:13:25.789 18:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:25.789 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:25.789 18:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:13:25.789 18:32:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.789 18:32:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.789 18:32:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.789 18:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:25.789 18:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:25.789 18:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:25.789 18:32:48 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:26.048 18:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:13:26.048 18:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:26.048 18:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:26.048 18:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:26.048 18:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:26.048 18:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:26.048 18:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:26.048 18:32:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.048 18:32:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.048 18:32:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.048 18:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:26.048 18:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:26.306 00:13:26.306 18:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:26.306 18:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:26.306 18:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:26.564 18:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:26.564 18:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:26.564 18:32:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.564 18:32:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.564 18:32:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.564 18:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:26.564 { 00:13:26.564 "auth": { 00:13:26.564 "dhgroup": "ffdhe2048", 00:13:26.564 "digest": "sha512", 00:13:26.564 "state": "completed" 00:13:26.564 }, 00:13:26.564 "cntlid": 105, 00:13:26.564 "listen_address": { 00:13:26.564 "adrfam": "IPv4", 00:13:26.564 "traddr": "10.0.0.2", 00:13:26.564 "trsvcid": "4420", 00:13:26.564 "trtype": "TCP" 00:13:26.564 }, 00:13:26.564 "peer_address": { 00:13:26.564 "adrfam": "IPv4", 00:13:26.564 "traddr": "10.0.0.1", 00:13:26.564 "trsvcid": "52404", 00:13:26.564 "trtype": "TCP" 00:13:26.564 }, 00:13:26.564 "qid": 0, 
00:13:26.564 "state": "enabled", 00:13:26.564 "thread": "nvmf_tgt_poll_group_000" 00:13:26.564 } 00:13:26.564 ]' 00:13:26.564 18:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:26.564 18:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:26.564 18:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:26.564 18:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:26.564 18:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:26.564 18:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:26.564 18:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:26.564 18:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:26.823 18:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-secret DHHC-1:00:OTdlNjAyOWFlOWU5YjYwNjUyMjcxY2MzOTk1YTIzNTFkZjg3MGM1YmI1ZjJkZWVjWYQ0Hg==: --dhchap-ctrl-secret DHHC-1:03:YmY3YzY0MWI1NWI1NTNlM2VkNmY3MDliZDAzOTUxNWFjMDRlNjU4Y2YxYTE4NzZjYTlmYmM4NmYxNzNmN2ZkNngQnKU=: 00:13:27.392 18:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:27.392 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:27.392 18:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:13:27.392 18:32:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.392 18:32:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.392 18:32:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.392 18:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:27.392 18:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:27.392 18:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:27.650 18:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:13:27.650 18:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:27.650 18:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:27.650 18:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:27.650 18:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:27.650 18:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:27.650 18:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:27.650 18:32:50 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.650 18:32:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.650 18:32:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.650 18:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:27.650 18:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:27.909 00:13:27.909 18:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:27.909 18:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:27.909 18:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:28.168 18:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:28.168 18:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:28.168 18:32:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.168 18:32:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.168 18:32:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.168 18:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:28.168 { 00:13:28.168 "auth": { 00:13:28.168 "dhgroup": "ffdhe2048", 00:13:28.168 "digest": "sha512", 00:13:28.168 "state": "completed" 00:13:28.168 }, 00:13:28.168 "cntlid": 107, 00:13:28.168 "listen_address": { 00:13:28.168 "adrfam": "IPv4", 00:13:28.168 "traddr": "10.0.0.2", 00:13:28.168 "trsvcid": "4420", 00:13:28.168 "trtype": "TCP" 00:13:28.168 }, 00:13:28.168 "peer_address": { 00:13:28.168 "adrfam": "IPv4", 00:13:28.168 "traddr": "10.0.0.1", 00:13:28.168 "trsvcid": "52434", 00:13:28.168 "trtype": "TCP" 00:13:28.168 }, 00:13:28.168 "qid": 0, 00:13:28.168 "state": "enabled", 00:13:28.168 "thread": "nvmf_tgt_poll_group_000" 00:13:28.168 } 00:13:28.168 ]' 00:13:28.168 18:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:28.168 18:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:28.168 18:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:28.168 18:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:28.168 18:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:28.426 18:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:28.426 18:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:28.426 18:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:28.426 18:32:51 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-secret DHHC-1:01:ZjlkNGIwMTQ3OTcyOWI3MWM0ODVkYjc4MjBjMzJiZjZeZGuM: --dhchap-ctrl-secret DHHC-1:02:ZGEyM2ZlOTg3NmNiMDk0ZDdhNGVhMjE3MDNkOTlkMmMxNGRhNWI1N2Y5NTViOTky87+8Nw==: 00:13:28.993 18:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:29.252 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:29.252 18:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:13:29.252 18:32:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.252 18:32:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.252 18:32:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.252 18:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:29.252 18:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:29.252 18:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:29.252 18:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:13:29.252 18:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:29.252 18:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:29.252 18:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:29.252 18:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:29.252 18:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:29.252 18:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:29.252 18:32:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.252 18:32:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.252 18:32:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.252 18:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:29.252 18:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:29.510 00:13:29.511 18:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:29.511 18:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r 
'.[].name' 00:13:29.511 18:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:29.769 18:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:29.769 18:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:29.769 18:32:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.769 18:32:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.769 18:32:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.769 18:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:29.769 { 00:13:29.769 "auth": { 00:13:29.769 "dhgroup": "ffdhe2048", 00:13:29.769 "digest": "sha512", 00:13:29.769 "state": "completed" 00:13:29.769 }, 00:13:29.769 "cntlid": 109, 00:13:29.769 "listen_address": { 00:13:29.769 "adrfam": "IPv4", 00:13:29.769 "traddr": "10.0.0.2", 00:13:29.769 "trsvcid": "4420", 00:13:29.769 "trtype": "TCP" 00:13:29.769 }, 00:13:29.769 "peer_address": { 00:13:29.769 "adrfam": "IPv4", 00:13:29.769 "traddr": "10.0.0.1", 00:13:29.769 "trsvcid": "52470", 00:13:29.769 "trtype": "TCP" 00:13:29.769 }, 00:13:29.769 "qid": 0, 00:13:29.769 "state": "enabled", 00:13:29.769 "thread": "nvmf_tgt_poll_group_000" 00:13:29.769 } 00:13:29.769 ]' 00:13:29.769 18:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:30.028 18:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:30.028 18:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:30.028 18:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:30.028 18:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:30.028 18:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:30.028 18:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:30.028 18:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:30.287 18:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-secret DHHC-1:02:Njg1MTJjZGU5NjQzZTdkOWUwOTFkYzZkMDczZTgxMjhhMzIxNGFlNDQ3NTRmMzk3fr04jw==: --dhchap-ctrl-secret DHHC-1:01:NDE4MTg3NjkzZjFkMjE3OWIzZmY3ZTRhMjgwYTY4YzEWyQf7: 00:13:30.854 18:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:30.854 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:30.854 18:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:13:30.854 18:32:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.854 18:32:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.854 18:32:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.854 18:32:53 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:30.854 18:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:30.854 18:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:30.854 18:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:13:30.854 18:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:30.854 18:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:30.854 18:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:30.854 18:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:30.854 18:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:30.854 18:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-key key3 00:13:30.854 18:32:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.854 18:32:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.854 18:32:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.854 18:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:30.854 18:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:31.112 00:13:31.371 18:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:31.371 18:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:31.371 18:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:31.371 18:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:31.371 18:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:31.371 18:32:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.371 18:32:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.371 18:32:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.371 18:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:31.371 { 00:13:31.371 "auth": { 00:13:31.371 "dhgroup": "ffdhe2048", 00:13:31.371 "digest": "sha512", 00:13:31.371 "state": "completed" 00:13:31.371 }, 00:13:31.371 "cntlid": 111, 00:13:31.371 "listen_address": { 00:13:31.371 "adrfam": "IPv4", 00:13:31.371 "traddr": "10.0.0.2", 00:13:31.371 "trsvcid": "4420", 00:13:31.371 "trtype": "TCP" 00:13:31.371 }, 00:13:31.371 "peer_address": { 00:13:31.371 
"adrfam": "IPv4", 00:13:31.371 "traddr": "10.0.0.1", 00:13:31.371 "trsvcid": "52488", 00:13:31.371 "trtype": "TCP" 00:13:31.371 }, 00:13:31.371 "qid": 0, 00:13:31.371 "state": "enabled", 00:13:31.371 "thread": "nvmf_tgt_poll_group_000" 00:13:31.371 } 00:13:31.371 ]' 00:13:31.371 18:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:31.629 18:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:31.629 18:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:31.629 18:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:31.629 18:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:31.629 18:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:31.630 18:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:31.630 18:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:31.888 18:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-secret DHHC-1:03:ZGUxN2MzNTZkY2QyNjA1Y2U4MTE2ODNjYWJjNGMzNTBlMjNhY2MwOWZiNjAxMjk2ZmQzMjk2YTRmMWZhMzNjNojQFFw=: 00:13:32.453 18:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:32.453 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:32.453 18:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:13:32.453 18:32:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.453 18:32:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.453 18:32:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.453 18:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:32.453 18:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:32.453 18:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:32.453 18:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:32.710 18:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:13:32.710 18:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:32.710 18:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:32.710 18:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:32.710 18:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:32.710 18:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:32.711 18:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:32.711 18:32:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.711 18:32:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.711 18:32:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.711 18:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:32.711 18:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:32.967 00:13:32.967 18:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:32.967 18:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:32.967 18:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:33.224 18:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:33.224 18:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:33.224 18:32:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.224 18:32:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.224 18:32:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.224 18:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:33.224 { 00:13:33.224 "auth": { 00:13:33.224 "dhgroup": "ffdhe3072", 00:13:33.224 "digest": "sha512", 00:13:33.224 "state": "completed" 00:13:33.224 }, 00:13:33.224 "cntlid": 113, 00:13:33.224 "listen_address": { 00:13:33.224 "adrfam": "IPv4", 00:13:33.224 "traddr": "10.0.0.2", 00:13:33.224 "trsvcid": "4420", 00:13:33.224 "trtype": "TCP" 00:13:33.224 }, 00:13:33.224 "peer_address": { 00:13:33.224 "adrfam": "IPv4", 00:13:33.224 "traddr": "10.0.0.1", 00:13:33.224 "trsvcid": "39866", 00:13:33.224 "trtype": "TCP" 00:13:33.224 }, 00:13:33.224 "qid": 0, 00:13:33.224 "state": "enabled", 00:13:33.224 "thread": "nvmf_tgt_poll_group_000" 00:13:33.224 } 00:13:33.224 ]' 00:13:33.224 18:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:33.224 18:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:33.224 18:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:33.224 18:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:33.224 18:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:33.224 18:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:33.224 18:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:33.224 18:32:55 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:33.481 18:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-secret DHHC-1:00:OTdlNjAyOWFlOWU5YjYwNjUyMjcxY2MzOTk1YTIzNTFkZjg3MGM1YmI1ZjJkZWVjWYQ0Hg==: --dhchap-ctrl-secret DHHC-1:03:YmY3YzY0MWI1NWI1NTNlM2VkNmY3MDliZDAzOTUxNWFjMDRlNjU4Y2YxYTE4NzZjYTlmYmM4NmYxNzNmN2ZkNngQnKU=: 00:13:34.046 18:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:34.046 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:34.046 18:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:13:34.046 18:32:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:34.046 18:32:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.046 18:32:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:34.046 18:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:34.046 18:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:34.046 18:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:34.304 18:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:13:34.304 18:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:34.304 18:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:34.304 18:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:34.304 18:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:34.304 18:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:34.304 18:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:34.304 18:32:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:34.304 18:32:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.304 18:32:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:34.304 18:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:34.304 18:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:13:34.562 00:13:34.562 18:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:34.563 18:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:34.563 18:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:34.821 18:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:34.821 18:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:34.821 18:32:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:34.821 18:32:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.821 18:32:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:34.821 18:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:34.821 { 00:13:34.821 "auth": { 00:13:34.821 "dhgroup": "ffdhe3072", 00:13:34.821 "digest": "sha512", 00:13:34.821 "state": "completed" 00:13:34.821 }, 00:13:34.821 "cntlid": 115, 00:13:34.821 "listen_address": { 00:13:34.821 "adrfam": "IPv4", 00:13:34.821 "traddr": "10.0.0.2", 00:13:34.821 "trsvcid": "4420", 00:13:34.821 "trtype": "TCP" 00:13:34.821 }, 00:13:34.821 "peer_address": { 00:13:34.821 "adrfam": "IPv4", 00:13:34.821 "traddr": "10.0.0.1", 00:13:34.821 "trsvcid": "39900", 00:13:34.821 "trtype": "TCP" 00:13:34.821 }, 00:13:34.821 "qid": 0, 00:13:34.821 "state": "enabled", 00:13:34.821 "thread": "nvmf_tgt_poll_group_000" 00:13:34.821 } 00:13:34.821 ]' 00:13:34.821 18:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:34.821 18:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:34.821 18:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:34.821 18:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:34.821 18:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:34.821 18:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:34.821 18:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:34.821 18:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:35.080 18:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-secret DHHC-1:01:ZjlkNGIwMTQ3OTcyOWI3MWM0ODVkYjc4MjBjMzJiZjZeZGuM: --dhchap-ctrl-secret DHHC-1:02:ZGEyM2ZlOTg3NmNiMDk0ZDdhNGVhMjE3MDNkOTlkMmMxNGRhNWI1N2Y5NTViOTky87+8Nw==: 00:13:35.647 18:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:35.647 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:35.647 18:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:13:35.647 18:32:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.647 
18:32:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.647 18:32:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.647 18:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:35.647 18:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:35.647 18:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:35.906 18:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:13:35.906 18:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:35.906 18:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:35.906 18:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:35.906 18:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:35.906 18:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:35.906 18:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:35.906 18:32:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.906 18:32:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.906 18:32:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.907 18:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:35.907 18:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:36.180 00:13:36.180 18:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:36.180 18:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:36.180 18:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:36.438 18:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:36.438 18:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:36.438 18:32:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.438 18:32:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.438 18:32:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.438 18:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:36.438 { 00:13:36.438 "auth": { 00:13:36.438 "dhgroup": "ffdhe3072", 00:13:36.438 "digest": "sha512", 
00:13:36.438 "state": "completed" 00:13:36.438 }, 00:13:36.438 "cntlid": 117, 00:13:36.438 "listen_address": { 00:13:36.438 "adrfam": "IPv4", 00:13:36.438 "traddr": "10.0.0.2", 00:13:36.438 "trsvcid": "4420", 00:13:36.438 "trtype": "TCP" 00:13:36.438 }, 00:13:36.438 "peer_address": { 00:13:36.438 "adrfam": "IPv4", 00:13:36.438 "traddr": "10.0.0.1", 00:13:36.438 "trsvcid": "39930", 00:13:36.438 "trtype": "TCP" 00:13:36.438 }, 00:13:36.438 "qid": 0, 00:13:36.438 "state": "enabled", 00:13:36.438 "thread": "nvmf_tgt_poll_group_000" 00:13:36.438 } 00:13:36.438 ]' 00:13:36.438 18:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:36.438 18:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:36.438 18:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:36.439 18:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:36.439 18:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:36.439 18:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:36.439 18:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:36.439 18:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:36.723 18:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-secret DHHC-1:02:Njg1MTJjZGU5NjQzZTdkOWUwOTFkYzZkMDczZTgxMjhhMzIxNGFlNDQ3NTRmMzk3fr04jw==: --dhchap-ctrl-secret DHHC-1:01:NDE4MTg3NjkzZjFkMjE3OWIzZmY3ZTRhMjgwYTY4YzEWyQf7: 00:13:37.290 18:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:37.290 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:37.290 18:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:13:37.290 18:32:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:37.290 18:32:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.290 18:32:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:37.290 18:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:37.290 18:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:37.290 18:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:37.550 18:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:13:37.550 18:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:37.550 18:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:37.550 18:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:37.550 18:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 
-- # key=key3 00:13:37.550 18:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:37.550 18:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-key key3 00:13:37.550 18:32:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:37.550 18:32:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.550 18:32:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:37.550 18:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:37.550 18:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:37.809 00:13:37.809 18:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:37.809 18:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:37.809 18:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:38.068 18:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:38.068 18:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:38.068 18:33:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.068 18:33:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.068 18:33:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.068 18:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:38.068 { 00:13:38.068 "auth": { 00:13:38.068 "dhgroup": "ffdhe3072", 00:13:38.068 "digest": "sha512", 00:13:38.068 "state": "completed" 00:13:38.068 }, 00:13:38.068 "cntlid": 119, 00:13:38.069 "listen_address": { 00:13:38.069 "adrfam": "IPv4", 00:13:38.069 "traddr": "10.0.0.2", 00:13:38.069 "trsvcid": "4420", 00:13:38.069 "trtype": "TCP" 00:13:38.069 }, 00:13:38.069 "peer_address": { 00:13:38.069 "adrfam": "IPv4", 00:13:38.069 "traddr": "10.0.0.1", 00:13:38.069 "trsvcid": "39974", 00:13:38.069 "trtype": "TCP" 00:13:38.069 }, 00:13:38.069 "qid": 0, 00:13:38.069 "state": "enabled", 00:13:38.069 "thread": "nvmf_tgt_poll_group_000" 00:13:38.069 } 00:13:38.069 ]' 00:13:38.069 18:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:38.069 18:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:38.069 18:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:38.069 18:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:38.069 18:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:38.327 18:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:38.327 
18:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:38.327 18:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:38.327 18:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-secret DHHC-1:03:ZGUxN2MzNTZkY2QyNjA1Y2U4MTE2ODNjYWJjNGMzNTBlMjNhY2MwOWZiNjAxMjk2ZmQzMjk2YTRmMWZhMzNjNojQFFw=: 00:13:38.895 18:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:38.895 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:38.895 18:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:13:38.895 18:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.895 18:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.895 18:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.895 18:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:38.895 18:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:38.895 18:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:38.895 18:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:39.154 18:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:13:39.154 18:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:39.154 18:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:39.154 18:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:39.154 18:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:39.154 18:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:39.154 18:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:39.154 18:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:39.154 18:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.154 18:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:39.154 18:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:39.154 18:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:39.413 00:13:39.413 18:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:39.413 18:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:39.413 18:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:39.672 18:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:39.672 18:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:39.672 18:33:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:39.672 18:33:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.672 18:33:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:39.673 18:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:39.673 { 00:13:39.673 "auth": { 00:13:39.673 "dhgroup": "ffdhe4096", 00:13:39.673 "digest": "sha512", 00:13:39.673 "state": "completed" 00:13:39.673 }, 00:13:39.673 "cntlid": 121, 00:13:39.673 "listen_address": { 00:13:39.673 "adrfam": "IPv4", 00:13:39.673 "traddr": "10.0.0.2", 00:13:39.673 "trsvcid": "4420", 00:13:39.673 "trtype": "TCP" 00:13:39.673 }, 00:13:39.673 "peer_address": { 00:13:39.673 "adrfam": "IPv4", 00:13:39.673 "traddr": "10.0.0.1", 00:13:39.673 "trsvcid": "40000", 00:13:39.673 "trtype": "TCP" 00:13:39.673 }, 00:13:39.673 "qid": 0, 00:13:39.673 "state": "enabled", 00:13:39.673 "thread": "nvmf_tgt_poll_group_000" 00:13:39.673 } 00:13:39.673 ]' 00:13:39.673 18:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:39.673 18:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:39.673 18:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:39.931 18:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:39.931 18:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:39.931 18:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:39.931 18:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:39.931 18:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:40.189 18:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-secret DHHC-1:00:OTdlNjAyOWFlOWU5YjYwNjUyMjcxY2MzOTk1YTIzNTFkZjg3MGM1YmI1ZjJkZWVjWYQ0Hg==: --dhchap-ctrl-secret DHHC-1:03:YmY3YzY0MWI1NWI1NTNlM2VkNmY3MDliZDAzOTUxNWFjMDRlNjU4Y2YxYTE4NzZjYTlmYmM4NmYxNzNmN2ZkNngQnKU=: 00:13:40.757 18:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:40.757 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:40.757 18:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:13:40.757 18:33:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:40.757 18:33:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.757 18:33:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:40.757 18:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:40.757 18:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:40.757 18:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:41.017 18:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:13:41.017 18:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:41.017 18:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:41.017 18:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:41.017 18:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:41.017 18:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:41.017 18:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:41.017 18:33:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:41.017 18:33:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.017 18:33:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:41.017 18:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:41.017 18:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:41.276 00:13:41.276 18:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:41.276 18:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:41.276 18:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:41.535 18:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:41.535 18:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:41.535 18:33:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:41.535 18:33:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.535 18:33:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:13:41.535 18:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:41.535 { 00:13:41.535 "auth": { 00:13:41.535 "dhgroup": "ffdhe4096", 00:13:41.535 "digest": "sha512", 00:13:41.535 "state": "completed" 00:13:41.535 }, 00:13:41.535 "cntlid": 123, 00:13:41.535 "listen_address": { 00:13:41.535 "adrfam": "IPv4", 00:13:41.535 "traddr": "10.0.0.2", 00:13:41.535 "trsvcid": "4420", 00:13:41.535 "trtype": "TCP" 00:13:41.535 }, 00:13:41.535 "peer_address": { 00:13:41.535 "adrfam": "IPv4", 00:13:41.535 "traddr": "10.0.0.1", 00:13:41.535 "trsvcid": "40014", 00:13:41.535 "trtype": "TCP" 00:13:41.535 }, 00:13:41.535 "qid": 0, 00:13:41.535 "state": "enabled", 00:13:41.535 "thread": "nvmf_tgt_poll_group_000" 00:13:41.535 } 00:13:41.535 ]' 00:13:41.535 18:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:41.535 18:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:41.535 18:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:41.535 18:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:41.535 18:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:41.535 18:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:41.535 18:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:41.535 18:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:41.794 18:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-secret DHHC-1:01:ZjlkNGIwMTQ3OTcyOWI3MWM0ODVkYjc4MjBjMzJiZjZeZGuM: --dhchap-ctrl-secret DHHC-1:02:ZGEyM2ZlOTg3NmNiMDk0ZDdhNGVhMjE3MDNkOTlkMmMxNGRhNWI1N2Y5NTViOTky87+8Nw==: 00:13:42.359 18:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:42.359 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:42.359 18:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:13:42.359 18:33:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.359 18:33:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.359 18:33:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.359 18:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:42.359 18:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:42.359 18:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:42.618 18:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:13:42.618 18:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:42.618 18:33:05 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:42.618 18:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:42.618 18:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:42.618 18:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:42.618 18:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:42.618 18:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.618 18:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.618 18:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.618 18:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:42.618 18:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:42.907 00:13:42.907 18:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:42.907 18:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:42.907 18:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:43.180 18:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:43.180 18:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:43.180 18:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:43.180 18:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.180 18:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:43.180 18:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:43.180 { 00:13:43.180 "auth": { 00:13:43.180 "dhgroup": "ffdhe4096", 00:13:43.180 "digest": "sha512", 00:13:43.180 "state": "completed" 00:13:43.180 }, 00:13:43.180 "cntlid": 125, 00:13:43.180 "listen_address": { 00:13:43.180 "adrfam": "IPv4", 00:13:43.180 "traddr": "10.0.0.2", 00:13:43.180 "trsvcid": "4420", 00:13:43.180 "trtype": "TCP" 00:13:43.180 }, 00:13:43.180 "peer_address": { 00:13:43.180 "adrfam": "IPv4", 00:13:43.180 "traddr": "10.0.0.1", 00:13:43.180 "trsvcid": "34156", 00:13:43.180 "trtype": "TCP" 00:13:43.180 }, 00:13:43.180 "qid": 0, 00:13:43.180 "state": "enabled", 00:13:43.180 "thread": "nvmf_tgt_poll_group_000" 00:13:43.180 } 00:13:43.180 ]' 00:13:43.180 18:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:43.180 18:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:43.180 18:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:43.180 18:33:05 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:43.180 18:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:43.451 18:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:43.451 18:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:43.451 18:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:43.451 18:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-secret DHHC-1:02:Njg1MTJjZGU5NjQzZTdkOWUwOTFkYzZkMDczZTgxMjhhMzIxNGFlNDQ3NTRmMzk3fr04jw==: --dhchap-ctrl-secret DHHC-1:01:NDE4MTg3NjkzZjFkMjE3OWIzZmY3ZTRhMjgwYTY4YzEWyQf7: 00:13:44.018 18:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:44.018 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:44.018 18:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:13:44.018 18:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.018 18:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.018 18:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.018 18:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:44.018 18:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:44.018 18:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:44.277 18:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:13:44.277 18:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:44.277 18:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:44.277 18:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:44.277 18:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:44.277 18:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:44.277 18:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-key key3 00:13:44.277 18:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.277 18:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.277 18:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.277 18:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:44.277 18:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:44.535 00:13:44.793 18:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:44.793 18:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:44.793 18:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:44.793 18:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:44.793 18:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:44.793 18:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.793 18:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.793 18:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.793 18:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:44.793 { 00:13:44.793 "auth": { 00:13:44.793 "dhgroup": "ffdhe4096", 00:13:44.793 "digest": "sha512", 00:13:44.793 "state": "completed" 00:13:44.793 }, 00:13:44.793 "cntlid": 127, 00:13:44.793 "listen_address": { 00:13:44.793 "adrfam": "IPv4", 00:13:44.793 "traddr": "10.0.0.2", 00:13:44.793 "trsvcid": "4420", 00:13:44.793 "trtype": "TCP" 00:13:44.793 }, 00:13:44.793 "peer_address": { 00:13:44.793 "adrfam": "IPv4", 00:13:44.793 "traddr": "10.0.0.1", 00:13:44.793 "trsvcid": "34178", 00:13:44.793 "trtype": "TCP" 00:13:44.793 }, 00:13:44.793 "qid": 0, 00:13:44.793 "state": "enabled", 00:13:44.793 "thread": "nvmf_tgt_poll_group_000" 00:13:44.793 } 00:13:44.793 ]' 00:13:44.793 18:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:45.052 18:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:45.052 18:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:45.052 18:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:45.052 18:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:45.052 18:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:45.052 18:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:45.052 18:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:45.310 18:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-secret DHHC-1:03:ZGUxN2MzNTZkY2QyNjA1Y2U4MTE2ODNjYWJjNGMzNTBlMjNhY2MwOWZiNjAxMjk2ZmQzMjk2YTRmMWZhMzNjNojQFFw=: 00:13:45.877 18:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:45.877 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:45.877 18:33:08 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:13:45.877 18:33:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:45.877 18:33:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.877 18:33:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:45.877 18:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:45.877 18:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:45.877 18:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:45.877 18:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:45.877 18:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:13:45.877 18:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:45.877 18:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:45.877 18:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:45.877 18:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:45.877 18:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:45.877 18:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:45.877 18:33:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:45.877 18:33:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.136 18:33:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:46.136 18:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:46.136 18:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:46.394 00:13:46.394 18:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:46.394 18:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:46.394 18:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:46.654 18:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:46.654 18:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:46.654 18:33:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:13:46.654 18:33:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.654 18:33:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:46.654 18:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:46.654 { 00:13:46.654 "auth": { 00:13:46.654 "dhgroup": "ffdhe6144", 00:13:46.654 "digest": "sha512", 00:13:46.654 "state": "completed" 00:13:46.654 }, 00:13:46.654 "cntlid": 129, 00:13:46.654 "listen_address": { 00:13:46.654 "adrfam": "IPv4", 00:13:46.654 "traddr": "10.0.0.2", 00:13:46.654 "trsvcid": "4420", 00:13:46.654 "trtype": "TCP" 00:13:46.654 }, 00:13:46.654 "peer_address": { 00:13:46.654 "adrfam": "IPv4", 00:13:46.654 "traddr": "10.0.0.1", 00:13:46.654 "trsvcid": "34206", 00:13:46.654 "trtype": "TCP" 00:13:46.654 }, 00:13:46.654 "qid": 0, 00:13:46.654 "state": "enabled", 00:13:46.654 "thread": "nvmf_tgt_poll_group_000" 00:13:46.654 } 00:13:46.654 ]' 00:13:46.654 18:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:46.654 18:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:46.654 18:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:46.654 18:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:46.654 18:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:46.654 18:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:46.654 18:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:46.654 18:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:46.912 18:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-secret DHHC-1:00:OTdlNjAyOWFlOWU5YjYwNjUyMjcxY2MzOTk1YTIzNTFkZjg3MGM1YmI1ZjJkZWVjWYQ0Hg==: --dhchap-ctrl-secret DHHC-1:03:YmY3YzY0MWI1NWI1NTNlM2VkNmY3MDliZDAzOTUxNWFjMDRlNjU4Y2YxYTE4NzZjYTlmYmM4NmYxNzNmN2ZkNngQnKU=: 00:13:47.481 18:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:47.481 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:47.481 18:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:13:47.481 18:33:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.481 18:33:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.481 18:33:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.481 18:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:47.481 18:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:47.481 18:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:47.740 18:33:10 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:13:47.740 18:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:47.740 18:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:47.740 18:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:47.740 18:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:47.740 18:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:47.740 18:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:47.740 18:33:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.740 18:33:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.740 18:33:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.740 18:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:47.740 18:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:48.306 00:13:48.306 18:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:48.306 18:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:48.306 18:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:48.306 18:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:48.306 18:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:48.306 18:33:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.306 18:33:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.306 18:33:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:48.306 18:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:48.306 { 00:13:48.306 "auth": { 00:13:48.306 "dhgroup": "ffdhe6144", 00:13:48.306 "digest": "sha512", 00:13:48.306 "state": "completed" 00:13:48.306 }, 00:13:48.306 "cntlid": 131, 00:13:48.306 "listen_address": { 00:13:48.306 "adrfam": "IPv4", 00:13:48.306 "traddr": "10.0.0.2", 00:13:48.306 "trsvcid": "4420", 00:13:48.306 "trtype": "TCP" 00:13:48.306 }, 00:13:48.306 "peer_address": { 00:13:48.306 "adrfam": "IPv4", 00:13:48.306 "traddr": "10.0.0.1", 00:13:48.306 "trsvcid": "34240", 00:13:48.306 "trtype": "TCP" 00:13:48.306 }, 00:13:48.306 "qid": 0, 00:13:48.306 "state": "enabled", 00:13:48.306 "thread": "nvmf_tgt_poll_group_000" 00:13:48.306 } 00:13:48.306 ]' 00:13:48.306 18:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 
00:13:48.564 18:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:48.564 18:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:48.564 18:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:48.564 18:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:48.564 18:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:48.564 18:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:48.564 18:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:48.823 18:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-secret DHHC-1:01:ZjlkNGIwMTQ3OTcyOWI3MWM0ODVkYjc4MjBjMzJiZjZeZGuM: --dhchap-ctrl-secret DHHC-1:02:ZGEyM2ZlOTg3NmNiMDk0ZDdhNGVhMjE3MDNkOTlkMmMxNGRhNWI1N2Y5NTViOTky87+8Nw==: 00:13:49.387 18:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:49.387 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:49.387 18:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:13:49.387 18:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.387 18:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.387 18:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.387 18:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:49.387 18:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:49.388 18:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:49.388 18:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:13:49.388 18:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:49.388 18:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:49.388 18:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:49.388 18:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:49.388 18:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:49.388 18:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:49.388 18:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.388 18:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.388 18:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:13:49.388 18:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:49.388 18:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:49.954 00:13:49.954 18:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:49.954 18:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:49.954 18:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:50.211 18:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:50.211 18:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:50.211 18:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.211 18:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.211 18:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.211 18:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:50.211 { 00:13:50.211 "auth": { 00:13:50.211 "dhgroup": "ffdhe6144", 00:13:50.211 "digest": "sha512", 00:13:50.211 "state": "completed" 00:13:50.211 }, 00:13:50.211 "cntlid": 133, 00:13:50.211 "listen_address": { 00:13:50.211 "adrfam": "IPv4", 00:13:50.211 "traddr": "10.0.0.2", 00:13:50.211 "trsvcid": "4420", 00:13:50.211 "trtype": "TCP" 00:13:50.211 }, 00:13:50.211 "peer_address": { 00:13:50.211 "adrfam": "IPv4", 00:13:50.211 "traddr": "10.0.0.1", 00:13:50.211 "trsvcid": "34260", 00:13:50.211 "trtype": "TCP" 00:13:50.211 }, 00:13:50.211 "qid": 0, 00:13:50.211 "state": "enabled", 00:13:50.211 "thread": "nvmf_tgt_poll_group_000" 00:13:50.211 } 00:13:50.211 ]' 00:13:50.211 18:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:50.211 18:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:50.211 18:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:50.211 18:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:50.211 18:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:50.211 18:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:50.211 18:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:50.211 18:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:50.468 18:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-secret 
DHHC-1:02:Njg1MTJjZGU5NjQzZTdkOWUwOTFkYzZkMDczZTgxMjhhMzIxNGFlNDQ3NTRmMzk3fr04jw==: --dhchap-ctrl-secret DHHC-1:01:NDE4MTg3NjkzZjFkMjE3OWIzZmY3ZTRhMjgwYTY4YzEWyQf7: 00:13:51.033 18:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:51.033 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:51.033 18:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:13:51.033 18:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.033 18:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.033 18:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.033 18:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:51.033 18:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:51.033 18:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:51.292 18:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:13:51.292 18:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:51.292 18:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:51.292 18:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:51.292 18:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:51.292 18:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:51.292 18:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-key key3 00:13:51.292 18:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.292 18:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.292 18:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.292 18:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:51.292 18:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:51.551 00:13:51.551 18:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:51.551 18:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:51.551 18:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:51.809 18:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:51.809 18:33:14 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:51.809 18:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.809 18:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.809 18:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.809 18:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:51.809 { 00:13:51.809 "auth": { 00:13:51.809 "dhgroup": "ffdhe6144", 00:13:51.809 "digest": "sha512", 00:13:51.809 "state": "completed" 00:13:51.809 }, 00:13:51.809 "cntlid": 135, 00:13:51.809 "listen_address": { 00:13:51.809 "adrfam": "IPv4", 00:13:51.809 "traddr": "10.0.0.2", 00:13:51.809 "trsvcid": "4420", 00:13:51.809 "trtype": "TCP" 00:13:51.809 }, 00:13:51.809 "peer_address": { 00:13:51.809 "adrfam": "IPv4", 00:13:51.809 "traddr": "10.0.0.1", 00:13:51.809 "trsvcid": "34366", 00:13:51.809 "trtype": "TCP" 00:13:51.809 }, 00:13:51.809 "qid": 0, 00:13:51.809 "state": "enabled", 00:13:51.809 "thread": "nvmf_tgt_poll_group_000" 00:13:51.809 } 00:13:51.809 ]' 00:13:51.809 18:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:51.809 18:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:51.809 18:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:52.066 18:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:52.066 18:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:52.066 18:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:52.066 18:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:52.066 18:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:52.324 18:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-secret DHHC-1:03:ZGUxN2MzNTZkY2QyNjA1Y2U4MTE2ODNjYWJjNGMzNTBlMjNhY2MwOWZiNjAxMjk2ZmQzMjk2YTRmMWZhMzNjNojQFFw=: 00:13:52.890 18:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:52.890 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:52.890 18:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:13:52.890 18:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.890 18:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.890 18:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.890 18:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:52.890 18:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:52.890 18:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:52.890 18:33:15 
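The pass above is one iteration of the test's connect_authenticate helper, here sha512 with ffdhe6144 and key3; the iterations that follow repeat it for ffdhe8192 with keys 0 through 3. Condensed from the commands visible in the log (rpc_cmd is the target-side rpc.py wrapper, hostrpc the host-side one on /var/tmp/host.sock; scripts/rpc.py below stands for the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path), each iteration is roughly:

  # host side: restrict the initiator to a single digest/DH group for this pass
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144

  # target side: allow the host NQN on the subsystem and bind it to a DH-HMAC-CHAP key
  scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 \
      --dhchap-key key3

  # host side: attach a controller, which triggers the authentication exchange
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
      -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 \
      -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3

For the passes that use a controller key, the same --dhchap-ctrlr-key ckeyN argument is added to both the add_host and attach_controller calls so the controller side is authenticated as well.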
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:52.890 18:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:13:52.890 18:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:52.890 18:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:52.890 18:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:52.890 18:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:52.890 18:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:52.890 18:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:52.890 18:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.890 18:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.890 18:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.890 18:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:52.890 18:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:53.457 00:13:53.457 18:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:53.457 18:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:53.457 18:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:53.715 18:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:53.715 18:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:53.715 18:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.715 18:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.715 18:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.715 18:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:53.715 { 00:13:53.715 "auth": { 00:13:53.715 "dhgroup": "ffdhe8192", 00:13:53.715 "digest": "sha512", 00:13:53.715 "state": "completed" 00:13:53.715 }, 00:13:53.715 "cntlid": 137, 00:13:53.715 "listen_address": { 00:13:53.715 "adrfam": "IPv4", 00:13:53.715 "traddr": "10.0.0.2", 00:13:53.715 "trsvcid": "4420", 00:13:53.715 "trtype": "TCP" 00:13:53.715 }, 00:13:53.715 "peer_address": { 00:13:53.715 "adrfam": "IPv4", 00:13:53.715 "traddr": "10.0.0.1", 00:13:53.715 "trsvcid": "34388", 00:13:53.715 "trtype": "TCP" 00:13:53.715 }, 00:13:53.715 "qid": 0, 
00:13:53.715 "state": "enabled", 00:13:53.715 "thread": "nvmf_tgt_poll_group_000" 00:13:53.715 } 00:13:53.715 ]' 00:13:53.715 18:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:53.715 18:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:53.715 18:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:53.715 18:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:53.715 18:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:53.973 18:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:53.973 18:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:53.973 18:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:53.973 18:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-secret DHHC-1:00:OTdlNjAyOWFlOWU5YjYwNjUyMjcxY2MzOTk1YTIzNTFkZjg3MGM1YmI1ZjJkZWVjWYQ0Hg==: --dhchap-ctrl-secret DHHC-1:03:YmY3YzY0MWI1NWI1NTNlM2VkNmY3MDliZDAzOTUxNWFjMDRlNjU4Y2YxYTE4NzZjYTlmYmM4NmYxNzNmN2ZkNngQnKU=: 00:13:54.539 18:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:54.539 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:54.539 18:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:13:54.539 18:33:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.539 18:33:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.539 18:33:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.539 18:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:54.539 18:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:54.539 18:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:54.797 18:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:13:54.797 18:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:54.797 18:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:54.797 18:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:54.797 18:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:54.797 18:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:54.797 18:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:54.797 18:33:17 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.797 18:33:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.797 18:33:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.797 18:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:54.797 18:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:55.363 00:13:55.363 18:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:55.363 18:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:55.363 18:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:55.622 18:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:55.622 18:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:55.622 18:33:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.622 18:33:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.622 18:33:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.622 18:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:55.622 { 00:13:55.622 "auth": { 00:13:55.622 "dhgroup": "ffdhe8192", 00:13:55.622 "digest": "sha512", 00:13:55.622 "state": "completed" 00:13:55.622 }, 00:13:55.622 "cntlid": 139, 00:13:55.622 "listen_address": { 00:13:55.622 "adrfam": "IPv4", 00:13:55.622 "traddr": "10.0.0.2", 00:13:55.622 "trsvcid": "4420", 00:13:55.622 "trtype": "TCP" 00:13:55.622 }, 00:13:55.622 "peer_address": { 00:13:55.622 "adrfam": "IPv4", 00:13:55.622 "traddr": "10.0.0.1", 00:13:55.622 "trsvcid": "34424", 00:13:55.622 "trtype": "TCP" 00:13:55.622 }, 00:13:55.622 "qid": 0, 00:13:55.622 "state": "enabled", 00:13:55.622 "thread": "nvmf_tgt_poll_group_000" 00:13:55.622 } 00:13:55.622 ]' 00:13:55.622 18:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:55.622 18:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:55.622 18:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:55.622 18:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:55.622 18:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:55.622 18:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:55.622 18:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:55.622 18:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:55.881 18:33:18 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-secret DHHC-1:01:ZjlkNGIwMTQ3OTcyOWI3MWM0ODVkYjc4MjBjMzJiZjZeZGuM: --dhchap-ctrl-secret DHHC-1:02:ZGEyM2ZlOTg3NmNiMDk0ZDdhNGVhMjE3MDNkOTlkMmMxNGRhNWI1N2Y5NTViOTky87+8Nw==: 00:13:56.458 18:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:56.458 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:56.458 18:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:13:56.458 18:33:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.458 18:33:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.458 18:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.458 18:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:56.458 18:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:56.458 18:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:56.717 18:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:13:56.717 18:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:56.717 18:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:56.717 18:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:56.717 18:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:56.717 18:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:56.717 18:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:56.717 18:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.717 18:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.717 18:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.717 18:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:56.717 18:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:57.284 00:13:57.284 18:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:57.284 18:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r 
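Each key is also exercised through the kernel initiator: nvme connect is issued with the same secrets in DHHC-1 form, and the 'disconnected 1 controller(s)' line from the follow-up nvme disconnect confirms that a controller actually came up. Schematically (the DHHC-1 strings below are placeholders for the generated test keys that appear in full in the log):

  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 \
      --hostid ee8aff67-4252-4979-91cf-1a72f40d57b6 \
      --dhchap-secret 'DHHC-1:01:<host key>' \
      --dhchap-ctrl-secret 'DHHC-1:02:<controller key>'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0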
'.[].name' 00:13:57.284 18:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:57.544 18:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:57.544 18:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:57.544 18:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.544 18:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.544 18:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.544 18:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:57.544 { 00:13:57.544 "auth": { 00:13:57.544 "dhgroup": "ffdhe8192", 00:13:57.544 "digest": "sha512", 00:13:57.544 "state": "completed" 00:13:57.544 }, 00:13:57.544 "cntlid": 141, 00:13:57.544 "listen_address": { 00:13:57.544 "adrfam": "IPv4", 00:13:57.544 "traddr": "10.0.0.2", 00:13:57.544 "trsvcid": "4420", 00:13:57.544 "trtype": "TCP" 00:13:57.544 }, 00:13:57.544 "peer_address": { 00:13:57.544 "adrfam": "IPv4", 00:13:57.544 "traddr": "10.0.0.1", 00:13:57.544 "trsvcid": "34448", 00:13:57.544 "trtype": "TCP" 00:13:57.544 }, 00:13:57.544 "qid": 0, 00:13:57.544 "state": "enabled", 00:13:57.544 "thread": "nvmf_tgt_poll_group_000" 00:13:57.544 } 00:13:57.544 ]' 00:13:57.544 18:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:57.544 18:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:57.544 18:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:57.544 18:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:57.544 18:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:57.544 18:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:57.544 18:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:57.544 18:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:57.802 18:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-secret DHHC-1:02:Njg1MTJjZGU5NjQzZTdkOWUwOTFkYzZkMDczZTgxMjhhMzIxNGFlNDQ3NTRmMzk3fr04jw==: --dhchap-ctrl-secret DHHC-1:01:NDE4MTg3NjkzZjFkMjE3OWIzZmY3ZTRhMjgwYTY4YzEWyQf7: 00:13:58.390 18:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:58.391 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:58.391 18:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:13:58.391 18:33:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.391 18:33:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.391 18:33:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.391 18:33:20 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:58.391 18:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:58.391 18:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:58.649 18:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:13:58.649 18:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:58.649 18:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:58.649 18:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:58.649 18:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:58.649 18:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:58.649 18:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-key key3 00:13:58.649 18:33:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.649 18:33:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.649 18:33:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.649 18:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:58.649 18:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:59.215 00:13:59.215 18:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:59.215 18:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:59.215 18:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:59.473 18:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:59.473 18:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:59.473 18:33:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.473 18:33:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.473 18:33:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.474 18:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:59.474 { 00:13:59.474 "auth": { 00:13:59.474 "dhgroup": "ffdhe8192", 00:13:59.474 "digest": "sha512", 00:13:59.474 "state": "completed" 00:13:59.474 }, 00:13:59.474 "cntlid": 143, 00:13:59.474 "listen_address": { 00:13:59.474 "adrfam": "IPv4", 00:13:59.474 "traddr": "10.0.0.2", 00:13:59.474 "trsvcid": "4420", 00:13:59.474 "trtype": "TCP" 00:13:59.474 }, 00:13:59.474 "peer_address": { 00:13:59.474 
"adrfam": "IPv4", 00:13:59.474 "traddr": "10.0.0.1", 00:13:59.474 "trsvcid": "34490", 00:13:59.474 "trtype": "TCP" 00:13:59.474 }, 00:13:59.474 "qid": 0, 00:13:59.474 "state": "enabled", 00:13:59.474 "thread": "nvmf_tgt_poll_group_000" 00:13:59.474 } 00:13:59.474 ]' 00:13:59.474 18:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:59.474 18:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:59.474 18:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:59.474 18:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:59.474 18:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:59.474 18:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:59.474 18:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:59.474 18:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:59.733 18:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-secret DHHC-1:03:ZGUxN2MzNTZkY2QyNjA1Y2U4MTE2ODNjYWJjNGMzNTBlMjNhY2MwOWZiNjAxMjk2ZmQzMjk2YTRmMWZhMzNjNojQFFw=: 00:14:00.303 18:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:00.303 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:00.303 18:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:14:00.303 18:33:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.303 18:33:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.303 18:33:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.303 18:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:14:00.303 18:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:14:00.303 18:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:14:00.303 18:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:00.303 18:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:00.303 18:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:00.562 18:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:14:00.562 18:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:00.562 18:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:00.562 18:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 
00:14:00.562 18:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:00.562 18:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:00.562 18:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:00.562 18:33:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.562 18:33:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.562 18:33:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.562 18:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:00.562 18:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:01.125 00:14:01.125 18:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:01.125 18:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:01.125 18:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:01.384 18:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:01.384 18:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:01.384 18:33:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.384 18:33:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.384 18:33:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.384 18:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:01.384 { 00:14:01.384 "auth": { 00:14:01.384 "dhgroup": "ffdhe8192", 00:14:01.384 "digest": "sha512", 00:14:01.384 "state": "completed" 00:14:01.384 }, 00:14:01.384 "cntlid": 145, 00:14:01.384 "listen_address": { 00:14:01.384 "adrfam": "IPv4", 00:14:01.384 "traddr": "10.0.0.2", 00:14:01.384 "trsvcid": "4420", 00:14:01.384 "trtype": "TCP" 00:14:01.384 }, 00:14:01.384 "peer_address": { 00:14:01.384 "adrfam": "IPv4", 00:14:01.384 "traddr": "10.0.0.1", 00:14:01.384 "trsvcid": "34516", 00:14:01.384 "trtype": "TCP" 00:14:01.384 }, 00:14:01.384 "qid": 0, 00:14:01.384 "state": "enabled", 00:14:01.384 "thread": "nvmf_tgt_poll_group_000" 00:14:01.384 } 00:14:01.384 ]' 00:14:01.384 18:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:01.384 18:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:01.384 18:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:01.384 18:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:01.384 18:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:14:01.384 18:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:01.384 18:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:01.384 18:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:01.673 18:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-secret DHHC-1:00:OTdlNjAyOWFlOWU5YjYwNjUyMjcxY2MzOTk1YTIzNTFkZjg3MGM1YmI1ZjJkZWVjWYQ0Hg==: --dhchap-ctrl-secret DHHC-1:03:YmY3YzY0MWI1NWI1NTNlM2VkNmY3MDliZDAzOTUxNWFjMDRlNjU4Y2YxYTE4NzZjYTlmYmM4NmYxNzNmN2ZkNngQnKU=: 00:14:02.265 18:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:02.265 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:02.265 18:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:14:02.265 18:33:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.265 18:33:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.265 18:33:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.265 18:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-key key1 00:14:02.265 18:33:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.265 18:33:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.265 18:33:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.265 18:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:14:02.265 18:33:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:14:02.265 18:33:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:14:02.265 18:33:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:14:02.265 18:33:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:02.265 18:33:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:14:02.265 18:33:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:02.265 18:33:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:14:02.265 18:33:24 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:14:02.834 2024/07/15 18:33:25 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:14:02.834 request: 00:14:02.834 { 00:14:02.834 "method": "bdev_nvme_attach_controller", 00:14:02.834 "params": { 00:14:02.834 "name": "nvme0", 00:14:02.834 "trtype": "tcp", 00:14:02.834 "traddr": "10.0.0.2", 00:14:02.834 "adrfam": "ipv4", 00:14:02.834 "trsvcid": "4420", 00:14:02.834 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:02.834 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6", 00:14:02.834 "prchk_reftag": false, 00:14:02.834 "prchk_guard": false, 00:14:02.834 "hdgst": false, 00:14:02.834 "ddgst": false, 00:14:02.835 "dhchap_key": "key2" 00:14:02.835 } 00:14:02.835 } 00:14:02.835 Got JSON-RPC error response 00:14:02.835 GoRPCClient: error on JSON-RPC call 00:14:02.835 18:33:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:14:02.835 18:33:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:02.835 18:33:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:02.835 18:33:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:02.835 18:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:14:02.835 18:33:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.835 18:33:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.835 18:33:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.835 18:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:02.835 18:33:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.835 18:33:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.835 18:33:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.835 18:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:02.835 18:33:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:14:02.835 18:33:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:02.835 18:33:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:14:02.835 18:33:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:02.835 18:33:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:14:02.835 18:33:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:02.835 18:33:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:02.835 18:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:03.403 2024/07/15 18:33:25 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:14:03.403 request: 00:14:03.403 { 00:14:03.403 "method": "bdev_nvme_attach_controller", 00:14:03.403 "params": { 00:14:03.403 "name": "nvme0", 00:14:03.403 "trtype": "tcp", 00:14:03.403 "traddr": "10.0.0.2", 00:14:03.403 "adrfam": "ipv4", 00:14:03.403 "trsvcid": "4420", 00:14:03.403 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:03.403 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6", 00:14:03.403 "prchk_reftag": false, 00:14:03.403 "prchk_guard": false, 00:14:03.403 "hdgst": false, 00:14:03.403 "ddgst": false, 00:14:03.403 "dhchap_key": "key1", 00:14:03.403 "dhchap_ctrlr_key": "ckey2" 00:14:03.403 } 00:14:03.403 } 00:14:03.403 Got JSON-RPC error response 00:14:03.403 GoRPCClient: error on JSON-RPC call 00:14:03.403 18:33:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:14:03.403 18:33:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:03.403 18:33:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:03.403 18:33:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:03.403 18:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:14:03.403 18:33:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.403 18:33:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.403 18:33:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.403 18:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-key key1 00:14:03.403 18:33:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.403 18:33:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.403 18:33:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.403 18:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:03.403 18:33:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:14:03.403 18:33:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:03.403 18:33:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:14:03.403 18:33:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:03.403 18:33:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:14:03.403 18:33:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:03.403 18:33:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:03.403 18:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:03.661 2024/07/15 18:33:26 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey1 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:14:03.661 request: 00:14:03.661 { 00:14:03.661 "method": "bdev_nvme_attach_controller", 00:14:03.661 "params": { 00:14:03.661 "name": "nvme0", 00:14:03.661 "trtype": "tcp", 00:14:03.661 "traddr": "10.0.0.2", 00:14:03.661 "adrfam": "ipv4", 00:14:03.661 "trsvcid": "4420", 00:14:03.661 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:03.661 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6", 00:14:03.661 "prchk_reftag": false, 00:14:03.661 "prchk_guard": false, 00:14:03.661 "hdgst": false, 00:14:03.661 "ddgst": false, 00:14:03.661 "dhchap_key": "key1", 00:14:03.661 "dhchap_ctrlr_key": "ckey1" 00:14:03.661 } 00:14:03.661 } 00:14:03.661 Got JSON-RPC error response 00:14:03.661 GoRPCClient: error on JSON-RPC call 00:14:03.919 18:33:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 
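The expected-failure cases at target/auth.sh@118, @125 and @132 all follow the same pattern: the target-side host entry is given one set of keys, the host-side attach deliberately offers a mismatching combination, and the NOT wrapper from autotest_common.sh requires the command to fail (hence the es=1 handling in the log). Each mismatch surfaces as the JSON-RPC error shown above, Code=-5 (Input/output error) from bdev_nvme_attach_controller. In outline, for the first case:

  # NOT (from autotest_common.sh) inverts the exit status: the attach must fail
  # case 1: target holds key1, host offers key2
  # case 2: target holds key1/ckey1, host pairs key1 with ckey2
  # case 3: target holds key1 without a controller key, host still sends ckey1
  scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 \
      --dhchap-key key1
  NOT scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
      -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 \
      -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2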
-- # es=1 00:14:03.919 18:33:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:03.919 18:33:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:03.919 18:33:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:03.919 18:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:14:03.919 18:33:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.919 18:33:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.919 18:33:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.919 18:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 77762 00:14:03.919 18:33:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 77762 ']' 00:14:03.919 18:33:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 77762 00:14:03.919 18:33:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:14:03.919 18:33:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:03.919 18:33:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77762 00:14:03.919 18:33:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:03.919 killing process with pid 77762 00:14:03.919 18:33:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:03.919 18:33:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77762' 00:14:03.919 18:33:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 77762 00:14:03.919 18:33:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 77762 00:14:03.919 18:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:14:03.919 18:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:03.919 18:33:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:03.919 18:33:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.919 18:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=82348 00:14:03.919 18:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 82348 00:14:03.919 18:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:14:03.919 18:33:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 82348 ']' 00:14:03.919 18:33:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:03.919 18:33:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:03.919 18:33:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
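At target/auth.sh@138-139 the nvmf target used so far (pid 77762) is killed and a fresh instance is started for the remaining cases, this time with --wait-for-rpc (so initialization is driven over RPC) and the nvmf_auth debug log flag enabled; the new instance gets pid 82348, and the test then waits for its RPC socket. The start command, as recorded in the log:

  # restart the target inside the test's network namespace with auth debug logging
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth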
00:14:03.919 18:33:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:03.919 18:33:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.855 18:33:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:04.855 18:33:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:14:04.855 18:33:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:04.855 18:33:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:04.855 18:33:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.855 18:33:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:04.855 18:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:14:04.855 18:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 82348 00:14:04.855 18:33:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 82348 ']' 00:14:04.855 18:33:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:04.855 18:33:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:04.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:04.855 18:33:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:04.855 18:33:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:04.855 18:33:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.134 18:33:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:05.134 18:33:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:14:05.134 18:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:14:05.134 18:33:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.134 18:33:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.393 18:33:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.393 18:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:14:05.393 18:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:05.393 18:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:05.393 18:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:05.393 18:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:05.393 18:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:05.393 18:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-key key3 00:14:05.393 18:33:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.393 18:33:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.393 18:33:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:14:05.393 18:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:05.393 18:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:05.960 00:14:05.960 18:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:05.960 18:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:05.960 18:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:05.961 18:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:05.961 18:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:05.961 18:33:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.961 18:33:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.961 18:33:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.961 18:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:05.961 { 00:14:05.961 "auth": { 00:14:05.961 "dhgroup": "ffdhe8192", 00:14:05.961 "digest": "sha512", 00:14:05.961 "state": "completed" 00:14:05.961 }, 00:14:05.961 "cntlid": 1, 00:14:05.961 "listen_address": { 00:14:05.961 "adrfam": "IPv4", 00:14:05.961 "traddr": "10.0.0.2", 00:14:05.961 "trsvcid": "4420", 00:14:05.961 "trtype": "TCP" 00:14:05.961 }, 00:14:05.961 "peer_address": { 00:14:05.961 "adrfam": "IPv4", 00:14:05.961 "traddr": "10.0.0.1", 00:14:05.961 "trsvcid": "38394", 00:14:05.961 "trtype": "TCP" 00:14:05.961 }, 00:14:05.961 "qid": 0, 00:14:05.961 "state": "enabled", 00:14:05.961 "thread": "nvmf_tgt_poll_group_000" 00:14:05.961 } 00:14:05.961 ]' 00:14:05.961 18:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:06.219 18:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:06.219 18:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:06.219 18:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:06.219 18:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:06.219 18:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:06.219 18:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:06.219 18:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:06.478 18:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-secret 
DHHC-1:03:ZGUxN2MzNTZkY2QyNjA1Y2U4MTE2ODNjYWJjNGMzNTBlMjNhY2MwOWZiNjAxMjk2ZmQzMjk2YTRmMWZhMzNjNojQFFw=: 00:14:07.046 18:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:07.046 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:07.046 18:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:14:07.046 18:33:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.046 18:33:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.046 18:33:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.046 18:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --dhchap-key key3 00:14:07.046 18:33:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.046 18:33:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.046 18:33:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.046 18:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:14:07.046 18:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:14:07.046 18:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:07.046 18:33:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:14:07.046 18:33:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:07.046 18:33:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:14:07.046 18:33:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:07.046 18:33:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:14:07.046 18:33:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:07.046 18:33:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:07.046 18:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:07.305 2024/07/15 18:33:29 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) 
hostnqn:nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:14:07.305 request: 00:14:07.305 { 00:14:07.305 "method": "bdev_nvme_attach_controller", 00:14:07.305 "params": { 00:14:07.305 "name": "nvme0", 00:14:07.305 "trtype": "tcp", 00:14:07.305 "traddr": "10.0.0.2", 00:14:07.305 "adrfam": "ipv4", 00:14:07.305 "trsvcid": "4420", 00:14:07.305 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:07.305 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6", 00:14:07.305 "prchk_reftag": false, 00:14:07.305 "prchk_guard": false, 00:14:07.305 "hdgst": false, 00:14:07.305 "ddgst": false, 00:14:07.305 "dhchap_key": "key3" 00:14:07.305 } 00:14:07.305 } 00:14:07.305 Got JSON-RPC error response 00:14:07.305 GoRPCClient: error on JSON-RPC call 00:14:07.305 18:33:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:14:07.305 18:33:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:07.305 18:33:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:07.305 18:33:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:07.305 18:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:14:07.305 18:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:14:07.305 18:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:14:07.305 18:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:14:07.564 18:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:07.564 18:33:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:14:07.564 18:33:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:07.564 18:33:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:14:07.564 18:33:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:07.564 18:33:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:14:07.564 18:33:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:07.564 18:33:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:07.564 18:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:07.824 request: 00:14:07.824 { 00:14:07.824 "method": "bdev_nvme_attach_controller", 00:14:07.824 "params": { 00:14:07.824 "name": "nvme0", 00:14:07.824 "trtype": "tcp", 00:14:07.824 "traddr": "10.0.0.2", 00:14:07.824 "adrfam": "ipv4", 00:14:07.824 "trsvcid": "4420", 00:14:07.824 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:07.824 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6", 00:14:07.824 "prchk_reftag": false, 00:14:07.824 "prchk_guard": false, 00:14:07.824 "hdgst": false, 00:14:07.824 "ddgst": false, 00:14:07.824 "dhchap_key": "key3" 00:14:07.824 } 00:14:07.824 } 00:14:07.824 Got JSON-RPC error response 00:14:07.824 GoRPCClient: error on JSON-RPC call 00:14:07.824 2024/07/15 18:33:30 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:14:07.824 18:33:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:14:07.824 18:33:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:07.824 18:33:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:07.824 18:33:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:07.824 18:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:14:07.824 18:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:14:07.824 18:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:14:07.824 18:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:07.824 18:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:07.824 18:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:08.083 18:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:14:08.083 18:33:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.083 18:33:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.083 18:33:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.083 18:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:14:08.083 18:33:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.083 18:33:30 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:08.083 18:33:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.083 18:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:08.083 18:33:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:14:08.084 18:33:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:08.084 18:33:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:14:08.084 18:33:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:08.084 18:33:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:14:08.084 18:33:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:08.084 18:33:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:08.084 18:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:08.343 2024/07/15 18:33:30 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:key1 dhchap_key:key0 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:14:08.343 request: 00:14:08.343 { 00:14:08.343 "method": "bdev_nvme_attach_controller", 00:14:08.343 "params": { 00:14:08.343 "name": "nvme0", 00:14:08.343 "trtype": "tcp", 00:14:08.343 "traddr": "10.0.0.2", 00:14:08.343 "adrfam": "ipv4", 00:14:08.343 "trsvcid": "4420", 00:14:08.343 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:08.343 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6", 00:14:08.343 "prchk_reftag": false, 00:14:08.343 "prchk_guard": false, 00:14:08.343 "hdgst": false, 00:14:08.343 "ddgst": false, 00:14:08.343 "dhchap_key": "key0", 00:14:08.343 "dhchap_ctrlr_key": "key1" 00:14:08.343 } 00:14:08.343 } 00:14:08.343 Got JSON-RPC error response 00:14:08.343 GoRPCClient: error on JSON-RPC call 00:14:08.343 18:33:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:14:08.343 18:33:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:08.343 18:33:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:08.343 18:33:30 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:08.343 18:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:14:08.343 18:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:14:08.602 00:14:08.602 18:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:14:08.602 18:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:14:08.602 18:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:08.862 18:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:08.862 18:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:08.862 18:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:09.121 18:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:14:09.121 18:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:14:09.121 18:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 77810 00:14:09.121 18:33:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 77810 ']' 00:14:09.121 18:33:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 77810 00:14:09.121 18:33:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:14:09.121 18:33:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:09.121 18:33:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77810 00:14:09.121 18:33:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:09.121 18:33:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:09.121 killing process with pid 77810 00:14:09.121 18:33:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77810' 00:14:09.121 18:33:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 77810 00:14:09.121 18:33:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 77810 00:14:09.380 18:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:14:09.380 18:33:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:09.380 18:33:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:14:09.380 18:33:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:09.380 18:33:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:14:09.380 18:33:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:09.380 18:33:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:09.380 rmmod nvme_tcp 00:14:09.380 rmmod nvme_fabrics 00:14:09.380 rmmod nvme_keyring 
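For reference, the host-side flow exercised by the auth trace above reduces to a short sequence of rpc.py calls against the host application's socket. The attach attempts wrapped in NOT are expected to fail: they are made with digest/dhgroup restrictions or key selections that do not match what the target side accepts, so bdev_nvme_attach_controller returns the Input/output error seen in the JSON-RPC responses and the NOT wrapper treats the non-zero exit status as a pass. A minimal sketch of the final, successful path follows, reusing the socket path, address, NQNs and key name from the trace (rpc.py stands for scripts/rpc.py in the SPDK repo; this is an illustration of the traced calls, not part of the test script itself):

# allow the full digest/dhgroup set on the host, then attach with a DH-HMAC-CHAP key
rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256,sha384,sha512 \
        --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0
# confirm the controller came up, then detach, as the trace does before cleanup
rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0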
00:14:09.381 18:33:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:09.381 18:33:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:14:09.381 18:33:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:14:09.381 18:33:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 82348 ']' 00:14:09.381 18:33:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 82348 00:14:09.381 18:33:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 82348 ']' 00:14:09.381 18:33:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 82348 00:14:09.381 18:33:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:14:09.381 18:33:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:09.381 18:33:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82348 00:14:09.640 18:33:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:09.640 18:33:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:09.640 killing process with pid 82348 00:14:09.640 18:33:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82348' 00:14:09.640 18:33:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 82348 00:14:09.640 18:33:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 82348 00:14:09.640 18:33:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:09.640 18:33:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:09.640 18:33:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:09.640 18:33:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:09.640 18:33:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:09.640 18:33:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:09.640 18:33:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:09.640 18:33:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:09.640 18:33:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:09.640 18:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.pSQ /tmp/spdk.key-sha256.1Uk /tmp/spdk.key-sha384.Sfg /tmp/spdk.key-sha512.PGA /tmp/spdk.key-sha512.GcK /tmp/spdk.key-sha384.eoz /tmp/spdk.key-sha256.dZa '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:14:09.640 ************************************ 00:14:09.640 END TEST nvmf_auth_target 00:14:09.640 ************************************ 00:14:09.640 00:14:09.640 real 2m19.610s 00:14:09.640 user 5m26.866s 00:14:09.640 sys 0m26.100s 00:14:09.640 18:33:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:09.640 18:33:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.899 18:33:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:09.899 18:33:32 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:14:09.899 18:33:32 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:14:09.899 18:33:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:14:09.899 18:33:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:09.899 18:33:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:09.899 ************************************ 00:14:09.899 START TEST nvmf_bdevio_no_huge 00:14:09.899 ************************************ 00:14:09.899 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:14:09.899 * Looking for test storage... 00:14:09.899 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:09.899 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:09.899 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:14:09.899 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:09.899 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:09.899 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:09.899 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:09.899 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:09.899 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:09.899 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:09.899 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:09.899 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:09.899 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:09.899 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:14:09.899 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=ee8aff67-4252-4979-91cf-1a72f40d57b6 00:14:09.899 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:09.899 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:09.899 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:09.899 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:09.899 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:09.899 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:09.899 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:09.899 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:09.899 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.899 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.899 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.900 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:14:09.900 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.900 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:14:09.900 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:09.900 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:09.900 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:09.900 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:09.900 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:09.900 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:09.900 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:09.900 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:09.900 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:09.900 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:09.900 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- 
target/bdevio.sh@14 -- # nvmftestinit 00:14:09.900 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:09.900 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:09.900 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:09.900 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:09.900 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:09.900 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:09.900 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:09.900 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:09.900 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:09.900 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:09.900 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:09.900 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:09.900 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:09.900 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:09.900 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:09.900 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:09.900 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:09.900 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:09.900 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:09.900 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:09.900 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:09.900 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:09.900 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:09.900 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:09.900 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:09.900 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:09.900 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:09.900 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:10.159 Cannot find device "nvmf_tgt_br" 00:14:10.159 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # true 00:14:10.159 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:10.159 Cannot find device "nvmf_tgt_br2" 00:14:10.159 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # true 00:14:10.159 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:10.159 18:33:32 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:10.159 Cannot find device "nvmf_tgt_br" 00:14:10.159 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # true 00:14:10.159 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:10.159 Cannot find device "nvmf_tgt_br2" 00:14:10.159 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # true 00:14:10.159 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:10.159 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:10.159 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:10.159 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:10.159 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:14:10.159 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:10.159 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:10.159 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:14:10.159 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:10.159 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:10.159 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:10.159 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:10.159 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:10.159 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:10.159 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:10.159 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:10.159 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:10.159 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:10.159 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:10.159 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:10.159 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:10.159 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:10.159 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:10.159 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:10.159 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:10.159 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:10.159 18:33:32 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:10.159 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:10.418 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:10.418 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:10.418 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:10.418 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:10.418 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:10.418 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:14:10.418 00:14:10.418 --- 10.0.0.2 ping statistics --- 00:14:10.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:10.418 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:14:10.418 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:10.418 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:10.418 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:14:10.418 00:14:10.418 --- 10.0.0.3 ping statistics --- 00:14:10.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:10.418 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:14:10.418 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:10.418 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:10.418 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:14:10.418 00:14:10.418 --- 10.0.0.1 ping statistics --- 00:14:10.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:10.418 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:14:10.418 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:10.418 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@433 -- # return 0 00:14:10.418 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:10.418 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:10.418 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:10.418 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:10.418 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:10.418 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:10.418 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:10.418 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:14:10.418 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:10.418 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:10.418 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:10.418 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:14:10.418 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=82746 00:14:10.418 18:33:32 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 82746 00:14:10.418 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 82746 ']' 00:14:10.418 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:10.418 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:10.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:10.418 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:10.418 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:10.418 18:33:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:10.418 [2024-07-15 18:33:32.905679] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:14:10.418 [2024-07-15 18:33:32.906130] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:14:10.677 [2024-07-15 18:33:33.045942] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:10.677 [2024-07-15 18:33:33.167681] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:10.677 [2024-07-15 18:33:33.167733] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:10.677 [2024-07-15 18:33:33.167742] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:10.677 [2024-07-15 18:33:33.167767] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:10.677 [2024-07-15 18:33:33.167775] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
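The nvmf_veth_init sequence traced above is easier to read when collapsed to its essential commands. A condensed sketch of the same bring-up follows, taken from the ip/iptables calls in the trace; the second target interface (nvmf_tgt_if2 with 10.0.0.3) is created the same way and omitted here for brevity:

# target side runs in its own network namespace; the initiator stays in the root namespace
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator <-> bridge leg
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target <-> bridge leg
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                            # initiator -> target, as checked above
# the target application is then started inside the namespace, exactly as in the trace:
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78

Keeping the target in a separate namespace lets the host-side nvme and rpc.py commands exercise a real TCP path over the veth bridge while everything runs on a single VM.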
00:14:10.677 [2024-07-15 18:33:33.168849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:14:10.677 [2024-07-15 18:33:33.169019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:14:10.677 [2024-07-15 18:33:33.169059] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:14:10.677 [2024-07-15 18:33:33.169064] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:11.273 18:33:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:11.273 18:33:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:14:11.273 18:33:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:11.273 18:33:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:11.273 18:33:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:11.273 18:33:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:11.273 18:33:33 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:11.273 18:33:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.273 18:33:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:11.273 [2024-07-15 18:33:33.796739] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:11.273 18:33:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.273 18:33:33 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:11.273 18:33:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.273 18:33:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:11.273 Malloc0 00:14:11.273 18:33:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.273 18:33:33 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:11.273 18:33:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.273 18:33:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:11.273 18:33:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.274 18:33:33 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:11.274 18:33:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.274 18:33:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:11.274 18:33:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.274 18:33:33 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:11.274 18:33:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.274 18:33:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:11.274 [2024-07-15 18:33:33.834665] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:11.274 18:33:33 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.274 18:33:33 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:14:11.274 18:33:33 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:14:11.274 18:33:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:14:11.274 18:33:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:14:11.274 18:33:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:11.274 18:33:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:11.274 { 00:14:11.274 "params": { 00:14:11.274 "name": "Nvme$subsystem", 00:14:11.274 "trtype": "$TEST_TRANSPORT", 00:14:11.274 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:11.274 "adrfam": "ipv4", 00:14:11.274 "trsvcid": "$NVMF_PORT", 00:14:11.274 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:11.274 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:11.274 "hdgst": ${hdgst:-false}, 00:14:11.274 "ddgst": ${ddgst:-false} 00:14:11.274 }, 00:14:11.274 "method": "bdev_nvme_attach_controller" 00:14:11.274 } 00:14:11.274 EOF 00:14:11.274 )") 00:14:11.274 18:33:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:14:11.274 18:33:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:14:11.274 18:33:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:14:11.274 18:33:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:11.274 "params": { 00:14:11.274 "name": "Nvme1", 00:14:11.274 "trtype": "tcp", 00:14:11.274 "traddr": "10.0.0.2", 00:14:11.274 "adrfam": "ipv4", 00:14:11.274 "trsvcid": "4420", 00:14:11.274 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:11.274 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:11.274 "hdgst": false, 00:14:11.274 "ddgst": false 00:14:11.274 }, 00:14:11.274 "method": "bdev_nvme_attach_controller" 00:14:11.274 }' 00:14:11.533 [2024-07-15 18:33:33.880248] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 
00:14:11.533 [2024-07-15 18:33:33.880313] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid82800 ] 00:14:11.534 [2024-07-15 18:33:34.017716] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:11.792 [2024-07-15 18:33:34.183347] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:11.792 [2024-07-15 18:33:34.183469] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:11.792 [2024-07-15 18:33:34.183471] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:11.792 I/O targets: 00:14:11.792 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:14:11.792 00:14:11.792 00:14:11.792 CUnit - A unit testing framework for C - Version 2.1-3 00:14:11.792 http://cunit.sourceforge.net/ 00:14:11.792 00:14:11.792 00:14:11.792 Suite: bdevio tests on: Nvme1n1 00:14:12.051 Test: blockdev write read block ...passed 00:14:12.051 Test: blockdev write zeroes read block ...passed 00:14:12.051 Test: blockdev write zeroes read no split ...passed 00:14:12.051 Test: blockdev write zeroes read split ...passed 00:14:12.051 Test: blockdev write zeroes read split partial ...passed 00:14:12.051 Test: blockdev reset ...[2024-07-15 18:33:34.534171] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:14:12.051 [2024-07-15 18:33:34.534271] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f06460 (9): Bad file descriptor 00:14:12.051 [2024-07-15 18:33:34.546239] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:14:12.051 passed 00:14:12.051 Test: blockdev write read 8 blocks ...passed 00:14:12.051 Test: blockdev write read size > 128k ...passed 00:14:12.051 Test: blockdev write read invalid size ...passed 00:14:12.051 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:12.051 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:12.051 Test: blockdev write read max offset ...passed 00:14:12.310 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:12.310 Test: blockdev writev readv 8 blocks ...passed 00:14:12.310 Test: blockdev writev readv 30 x 1block ...passed 00:14:12.310 Test: blockdev writev readv block ...passed 00:14:12.310 Test: blockdev writev readv size > 128k ...passed 00:14:12.310 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:12.310 Test: blockdev comparev and writev ...[2024-07-15 18:33:34.720547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:12.310 [2024-07-15 18:33:34.720604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:12.310 [2024-07-15 18:33:34.720628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:12.310 [2024-07-15 18:33:34.720643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:14:12.310 [2024-07-15 18:33:34.721053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:12.310 [2024-07-15 18:33:34.721100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:14:12.310 [2024-07-15 18:33:34.721123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:12.310 [2024-07-15 18:33:34.721138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:14:12.310 [2024-07-15 18:33:34.721517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:12.310 [2024-07-15 18:33:34.721558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:14:12.310 [2024-07-15 18:33:34.721598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:12.310 [2024-07-15 18:33:34.721613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:14:12.310 [2024-07-15 18:33:34.721977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:12.310 [2024-07-15 18:33:34.722017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:14:12.310 [2024-07-15 18:33:34.722039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:12.310 [2024-07-15 18:33:34.722053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:14:12.310 passed 00:14:12.310 Test: blockdev nvme passthru rw ...passed 00:14:12.310 Test: blockdev nvme passthru vendor specific ...[2024-07-15 18:33:34.805970] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:12.310 [2024-07-15 18:33:34.806025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:14:12.311 [2024-07-15 18:33:34.806272] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:12.311 [2024-07-15 18:33:34.806300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:14:12.311 [2024-07-15 18:33:34.806437] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:12.311 [2024-07-15 18:33:34.806461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:14:12.311 [2024-07-15 18:33:34.806631] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:12.311 [2024-07-15 18:33:34.806656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:14:12.311 passed 00:14:12.311 Test: blockdev nvme admin passthru ...passed 00:14:12.311 Test: blockdev copy ...passed 00:14:12.311 00:14:12.311 Run Summary: Type Total Ran Passed Failed Inactive 00:14:12.311 suites 1 1 n/a 0 0 00:14:12.311 tests 23 23 23 0 0 00:14:12.311 asserts 152 152 152 0 n/a 00:14:12.311 00:14:12.311 Elapsed time = 0.948 seconds 00:14:12.877 18:33:35 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:12.877 18:33:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.877 18:33:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:12.877 18:33:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.877 18:33:35 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:14:12.877 18:33:35 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:14:12.877 18:33:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:12.877 18:33:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:14:12.877 18:33:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:12.877 18:33:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:14:12.877 18:33:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:12.877 18:33:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:12.877 rmmod nvme_tcp 00:14:12.877 rmmod nvme_fabrics 00:14:12.877 rmmod nvme_keyring 00:14:12.877 18:33:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:12.878 18:33:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:14:12.878 18:33:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:14:12.878 18:33:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 82746 ']' 00:14:12.878 18:33:35 nvmf_tcp.nvmf_bdevio_no_huge 
-- nvmf/common.sh@490 -- # killprocess 82746 00:14:12.878 18:33:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 82746 ']' 00:14:12.878 18:33:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 82746 00:14:12.878 18:33:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:14:12.878 18:33:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:12.878 18:33:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82746 00:14:12.878 18:33:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:14:12.878 18:33:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:14:12.878 killing process with pid 82746 00:14:12.878 18:33:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82746' 00:14:12.878 18:33:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 82746 00:14:12.878 18:33:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 82746 00:14:13.444 18:33:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:13.444 18:33:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:13.444 18:33:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:13.444 18:33:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:13.444 18:33:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:13.444 18:33:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:13.444 18:33:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:13.444 18:33:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:13.444 18:33:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:13.444 00:14:13.444 real 0m3.519s 00:14:13.444 user 0m12.259s 00:14:13.444 sys 0m1.460s 00:14:13.444 18:33:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:13.444 ************************************ 00:14:13.444 END TEST nvmf_bdevio_no_huge 00:14:13.444 ************************************ 00:14:13.444 18:33:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:13.444 18:33:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:13.444 18:33:35 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:14:13.444 18:33:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:13.444 18:33:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:13.444 18:33:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:13.444 ************************************ 00:14:13.444 START TEST nvmf_tls 00:14:13.444 ************************************ 00:14:13.444 18:33:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:14:13.444 * Looking for test storage... 
00:14:13.444 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:13.444 18:33:36 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:13.444 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:14:13.444 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:13.444 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:13.444 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:13.444 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:13.445 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:13.445 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:13.445 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:13.445 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:13.445 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:13.445 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:13.445 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:14:13.445 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=ee8aff67-4252-4979-91cf-1a72f40d57b6 00:14:13.445 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:13.702 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:13.702 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:13.702 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:13.702 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:13.702 18:33:36 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:13.702 18:33:36 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:13.702 18:33:36 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:13.702 18:33:36 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.703 18:33:36 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.703 18:33:36 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.703 18:33:36 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:14:13.703 18:33:36 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.703 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:14:13.703 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:13.703 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:13.703 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:13.703 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:13.703 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:13.703 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:13.703 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:13.703 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:13.703 18:33:36 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:13.703 18:33:36 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:14:13.703 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:13.703 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:13.703 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:13.703 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:13.703 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:13.703 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:13.703 18:33:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:13.703 18:33:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:13.703 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:13.703 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:13.703 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:13.703 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:13.703 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:13.703 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:13.703 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@141 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:14:13.703 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:13.703 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:13.703 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:13.703 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:13.703 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:13.703 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:13.703 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:13.703 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:13.703 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:13.703 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:13.703 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:13.703 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:13.703 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:13.703 Cannot find device "nvmf_tgt_br" 00:14:13.703 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # true 00:14:13.703 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:13.703 Cannot find device "nvmf_tgt_br2" 00:14:13.703 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # true 00:14:13.703 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:13.703 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:13.703 Cannot find device "nvmf_tgt_br" 00:14:13.703 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # true 00:14:13.703 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:13.703 Cannot find device "nvmf_tgt_br2" 00:14:13.703 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # true 00:14:13.703 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:13.703 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:13.703 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:13.703 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:13.703 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # true 00:14:13.703 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:13.703 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:13.703 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # true 00:14:13.703 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:13.703 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:13.703 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:13.703 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:13.703 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:14:13.703 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:13.703 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:13.961 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:13.961 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:13.961 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:13.961 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:13.961 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:13.961 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:13.961 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:13.961 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:13.961 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:13.961 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:13.961 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:13.961 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:13.961 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:13.961 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:13.961 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:13.961 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:13.961 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:13.961 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:13.961 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:14:13.961 00:14:13.961 --- 10.0.0.2 ping statistics --- 00:14:13.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:13.961 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:14:13.961 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:13.961 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:13.961 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:14:13.961 00:14:13.961 --- 10.0.0.3 ping statistics --- 00:14:13.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:13.961 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:14:13.961 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:13.961 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:13.961 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:14:13.961 00:14:13.961 --- 10.0.0.1 ping statistics --- 00:14:13.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:13.961 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:14:13.961 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:13.961 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@433 -- # return 0 00:14:13.961 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:13.961 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:13.961 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:13.961 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:13.962 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:13.962 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:13.962 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:13.962 18:33:36 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:14:13.962 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:13.962 18:33:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:13.962 18:33:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:13.962 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=82991 00:14:13.962 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:14:13.962 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 82991 00:14:13.962 18:33:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 82991 ']' 00:14:13.962 18:33:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:13.962 18:33:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:13.962 18:33:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:13.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:13.962 18:33:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:13.962 18:33:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:13.962 [2024-07-15 18:33:36.559505] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:14:13.962 [2024-07-15 18:33:36.559594] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:14.219 [2024-07-15 18:33:36.703654] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:14.219 [2024-07-15 18:33:36.787232] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:14.219 [2024-07-15 18:33:36.787286] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:14.219 [2024-07-15 18:33:36.787296] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:14.219 [2024-07-15 18:33:36.787304] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:14.219 [2024-07-15 18:33:36.787311] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:14.219 [2024-07-15 18:33:36.787342] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:15.153 18:33:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:15.153 18:33:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:15.153 18:33:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:15.153 18:33:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:15.153 18:33:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:15.153 18:33:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:15.153 18:33:37 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:14:15.153 18:33:37 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:14:15.153 true 00:14:15.153 18:33:37 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:15.153 18:33:37 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:14:15.411 18:33:37 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:14:15.411 18:33:37 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:14:15.411 18:33:37 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:14:15.669 18:33:38 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:14:15.669 18:33:38 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:15.928 18:33:38 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:14:15.928 18:33:38 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:14:15.928 18:33:38 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:14:15.928 18:33:38 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:15.928 18:33:38 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:14:16.187 18:33:38 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:14:16.187 18:33:38 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:14:16.187 18:33:38 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:14:16.187 18:33:38 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:16.446 18:33:38 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:14:16.446 18:33:38 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:14:16.446 18:33:38 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:14:16.704 18:33:39 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:14:16.704 18:33:39 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 
00:14:16.962 18:33:39 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:14:16.962 18:33:39 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:14:16.963 18:33:39 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:14:16.963 18:33:39 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:16.963 18:33:39 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:14:17.221 18:33:39 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:14:17.221 18:33:39 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:14:17.221 18:33:39 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:14:17.221 18:33:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:14:17.221 18:33:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:14:17.221 18:33:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:14:17.221 18:33:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:14:17.221 18:33:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:14:17.221 18:33:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:14:17.221 18:33:39 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:14:17.221 18:33:39 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:14:17.221 18:33:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:14:17.221 18:33:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:14:17.221 18:33:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:14:17.221 18:33:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:14:17.221 18:33:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:14:17.221 18:33:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:14:17.479 18:33:39 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:14:17.479 18:33:39 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:14:17.479 18:33:39 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.tnWw9LMegD 00:14:17.479 18:33:39 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:14:17.479 18:33:39 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.IL7iignecZ 00:14:17.479 18:33:39 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:14:17.479 18:33:39 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:14:17.479 18:33:39 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.tnWw9LMegD 00:14:17.479 18:33:39 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.IL7iignecZ 00:14:17.479 18:33:39 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:14:17.479 18:33:40 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:14:17.738 18:33:40 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.tnWw9LMegD 
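The NVMeTLSkey-1 strings generated just above by format_interchange_psk follow the NVMe TLS PSK interchange layout: a fixed prefix, a two-digit hash identifier (01 here), and a base64 blob, each separated by colons. The sketch below is an approximation rather than the exact one-liner nvmf/common.sh pipes into `python -`; it assumes the blob is the ASCII key bytes followed by a little-endian CRC-32 of those bytes.

# Minimal sketch of format_interchange_psk (see nvmf/common.sh@702-@715 in the trace).
# The CRC-32 byte order and the use of zlib's CRC are assumptions, not copied from the script.
format_interchange_psk_sketch() {
    local key=$1 digest=$2
    python3 -c '
import base64, struct, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
# Append a CRC-32 of the PSK bytes (little-endian packing is an assumption),
# then base64-encode key+CRC to form the interchange blob.
raw = key + struct.pack("<I", zlib.crc32(key))
print(f"NVMeTLSkey-1:{digest:02}:{base64.b64encode(raw).decode()}:")
' "$key" "$digest"
}

# If the packing assumption holds, these reproduce the key and key_2 values captured above,
# e.g. NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
format_interchange_psk_sketch 00112233445566778899aabbccddeeff 1
format_interchange_psk_sketch ffeeddccbbaa99887766554433221100 1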
00:14:17.738 18:33:40 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.tnWw9LMegD 00:14:17.738 18:33:40 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:17.997 [2024-07-15 18:33:40.531323] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:17.997 18:33:40 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:18.256 18:33:40 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:18.516 [2024-07-15 18:33:40.902810] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:18.516 [2024-07-15 18:33:40.903011] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:18.516 18:33:40 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:18.516 malloc0 00:14:18.516 18:33:41 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:18.774 18:33:41 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.tnWw9LMegD 00:14:19.034 [2024-07-15 18:33:41.491014] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:19.034 18:33:41 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.tnWw9LMegD 00:14:31.253 Initializing NVMe Controllers 00:14:31.253 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:31.253 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:31.253 Initialization complete. Launching workers. 
00:14:31.253 ======================================================== 00:14:31.253 Latency(us) 00:14:31.253 Device Information : IOPS MiB/s Average min max 00:14:31.253 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14586.07 56.98 4388.26 925.58 15188.58 00:14:31.253 ======================================================== 00:14:31.253 Total : 14586.07 56.98 4388.26 925.58 15188.58 00:14:31.253 00:14:31.253 18:33:51 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.tnWw9LMegD 00:14:31.253 18:33:51 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:31.253 18:33:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:31.253 18:33:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:31.253 18:33:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.tnWw9LMegD' 00:14:31.253 18:33:51 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:31.253 18:33:51 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83340 00:14:31.253 18:33:51 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:31.253 18:33:51 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83340 /var/tmp/bdevperf.sock 00:14:31.253 18:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 83340 ']' 00:14:31.253 18:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:31.253 18:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:31.253 18:33:51 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:31.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:31.253 18:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:31.253 18:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:31.253 18:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:31.253 [2024-07-15 18:33:51.739503] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 
00:14:31.253 [2024-07-15 18:33:51.739598] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83340 ] 00:14:31.253 [2024-07-15 18:33:51.883280] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:31.253 [2024-07-15 18:33:51.975619] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:31.253 18:33:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:31.253 18:33:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:31.253 18:33:52 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.tnWw9LMegD 00:14:31.253 [2024-07-15 18:33:52.852171] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:31.253 [2024-07-15 18:33:52.852271] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:31.253 TLSTESTn1 00:14:31.253 18:33:52 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:31.253 Running I/O for 10 seconds... 00:14:41.232 00:14:41.232 Latency(us) 00:14:41.232 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:41.232 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:41.232 Verification LBA range: start 0x0 length 0x2000 00:14:41.232 TLSTESTn1 : 10.01 5675.93 22.17 0.00 0.00 22515.54 5369.21 20634.63 00:14:41.232 =================================================================================================================== 00:14:41.232 Total : 5675.93 22.17 0.00 0.00 22515.54 5369.21 20634.63 00:14:41.232 0 00:14:41.232 18:34:03 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:41.232 18:34:03 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 83340 00:14:41.232 18:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 83340 ']' 00:14:41.232 18:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 83340 00:14:41.232 18:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:41.232 18:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:41.232 18:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83340 00:14:41.232 18:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:41.232 18:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:14:41.232 killing process with pid 83340 00:14:41.232 18:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83340' 00:14:41.232 Received shutdown signal, test time was about 10.000000 seconds 00:14:41.232 00:14:41.232 Latency(us) 00:14:41.232 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:41.232 =================================================================================================================== 00:14:41.232 Total : 0.00 0.00 0.00 0.00 0.00 0.00 
0.00 00:14:41.232 18:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 83340 00:14:41.232 [2024-07-15 18:34:03.087222] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:41.232 18:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 83340 00:14:41.232 18:34:03 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.IL7iignecZ 00:14:41.232 18:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:14:41.232 18:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.IL7iignecZ 00:14:41.232 18:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:14:41.232 18:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:41.232 18:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:14:41.232 18:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:41.232 18:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.IL7iignecZ 00:14:41.232 18:34:03 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:41.232 18:34:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:41.232 18:34:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:41.232 18:34:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.IL7iignecZ' 00:14:41.232 18:34:03 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:41.232 18:34:03 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83492 00:14:41.232 18:34:03 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:41.232 18:34:03 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:41.232 18:34:03 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83492 /var/tmp/bdevperf.sock 00:14:41.232 18:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 83492 ']' 00:14:41.232 18:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:41.232 18:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:41.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:41.232 18:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:41.232 18:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:41.233 18:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:41.233 [2024-07-15 18:34:03.331942] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 
00:14:41.233 [2024-07-15 18:34:03.332025] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83492 ] 00:14:41.233 [2024-07-15 18:34:03.472534] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:41.233 [2024-07-15 18:34:03.558479] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:41.800 18:34:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:41.800 18:34:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:41.800 18:34:04 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.IL7iignecZ 00:14:42.059 [2024-07-15 18:34:04.428204] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:42.059 [2024-07-15 18:34:04.428322] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:42.059 [2024-07-15 18:34:04.432826] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:42.059 [2024-07-15 18:34:04.433481] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xde3ca0 (107): Transport endpoint is not connected 00:14:42.059 [2024-07-15 18:34:04.434469] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xde3ca0 (9): Bad file descriptor 00:14:42.059 [2024-07-15 18:34:04.435465] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:14:42.059 [2024-07-15 18:34:04.435491] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:42.059 [2024-07-15 18:34:04.435503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:14:42.059 2024/07/15 18:34:04 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.IL7iignecZ subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:14:42.059 request: 00:14:42.059 { 00:14:42.059 "method": "bdev_nvme_attach_controller", 00:14:42.059 "params": { 00:14:42.059 "name": "TLSTEST", 00:14:42.059 "trtype": "tcp", 00:14:42.059 "traddr": "10.0.0.2", 00:14:42.059 "adrfam": "ipv4", 00:14:42.059 "trsvcid": "4420", 00:14:42.059 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:42.059 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:42.059 "prchk_reftag": false, 00:14:42.059 "prchk_guard": false, 00:14:42.059 "hdgst": false, 00:14:42.059 "ddgst": false, 00:14:42.059 "psk": "/tmp/tmp.IL7iignecZ" 00:14:42.059 } 00:14:42.059 } 00:14:42.059 Got JSON-RPC error response 00:14:42.059 GoRPCClient: error on JSON-RPC call 00:14:42.059 18:34:04 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 83492 00:14:42.059 18:34:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 83492 ']' 00:14:42.059 18:34:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 83492 00:14:42.059 18:34:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:42.059 18:34:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:42.059 18:34:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83492 00:14:42.059 18:34:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:42.059 18:34:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:14:42.059 18:34:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83492' 00:14:42.059 killing process with pid 83492 00:14:42.059 18:34:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 83492 00:14:42.059 Received shutdown signal, test time was about 10.000000 seconds 00:14:42.059 00:14:42.059 Latency(us) 00:14:42.059 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:42.059 =================================================================================================================== 00:14:42.059 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:42.059 [2024-07-15 18:34:04.493453] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:42.059 18:34:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 83492 00:14:42.318 18:34:04 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:14:42.318 18:34:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:14:42.318 18:34:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:42.318 18:34:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:42.318 18:34:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:42.318 18:34:04 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.tnWw9LMegD 00:14:42.318 18:34:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:14:42.318 18:34:04 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.tnWw9LMegD 00:14:42.318 18:34:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:14:42.318 18:34:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:42.318 18:34:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:14:42.318 18:34:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:42.318 18:34:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.tnWw9LMegD 00:14:42.318 18:34:04 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:42.318 18:34:04 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:42.318 18:34:04 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:14:42.318 18:34:04 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.tnWw9LMegD' 00:14:42.318 18:34:04 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:42.318 18:34:04 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83532 00:14:42.318 18:34:04 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:42.319 18:34:04 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:42.319 18:34:04 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83532 /var/tmp/bdevperf.sock 00:14:42.319 18:34:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 83532 ']' 00:14:42.319 18:34:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:42.319 18:34:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:42.319 18:34:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:42.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:42.319 18:34:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:42.319 18:34:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:42.319 [2024-07-15 18:34:04.732976] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 
00:14:42.319 [2024-07-15 18:34:04.733054] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83532 ] 00:14:42.319 [2024-07-15 18:34:04.875760] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:42.577 [2024-07-15 18:34:04.975200] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:43.144 18:34:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:43.144 18:34:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:43.144 18:34:05 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.tnWw9LMegD 00:14:43.403 [2024-07-15 18:34:05.781496] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:43.403 [2024-07-15 18:34:05.781605] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:43.403 [2024-07-15 18:34:05.789202] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:43.403 [2024-07-15 18:34:05.789241] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:43.403 [2024-07-15 18:34:05.789287] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:43.403 [2024-07-15 18:34:05.789861] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x204dca0 (107): Transport endpoint is not connected 00:14:43.403 [2024-07-15 18:34:05.790848] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x204dca0 (9): Bad file descriptor 00:14:43.403 [2024-07-15 18:34:05.791845] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:14:43.403 [2024-07-15 18:34:05.791866] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:43.403 [2024-07-15 18:34:05.791878] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:14:43.403 2024/07/15 18:34:05 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.tnWw9LMegD subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:14:43.403 request: 00:14:43.403 { 00:14:43.403 "method": "bdev_nvme_attach_controller", 00:14:43.403 "params": { 00:14:43.403 "name": "TLSTEST", 00:14:43.403 "trtype": "tcp", 00:14:43.403 "traddr": "10.0.0.2", 00:14:43.403 "adrfam": "ipv4", 00:14:43.403 "trsvcid": "4420", 00:14:43.403 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:43.403 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:14:43.403 "prchk_reftag": false, 00:14:43.403 "prchk_guard": false, 00:14:43.403 "hdgst": false, 00:14:43.403 "ddgst": false, 00:14:43.403 "psk": "/tmp/tmp.tnWw9LMegD" 00:14:43.403 } 00:14:43.403 } 00:14:43.403 Got JSON-RPC error response 00:14:43.403 GoRPCClient: error on JSON-RPC call 00:14:43.403 18:34:05 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 83532 00:14:43.403 18:34:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 83532 ']' 00:14:43.403 18:34:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 83532 00:14:43.403 18:34:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:43.403 18:34:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:43.403 18:34:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83532 00:14:43.403 18:34:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:43.403 18:34:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:14:43.403 18:34:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83532' 00:14:43.403 killing process with pid 83532 00:14:43.403 18:34:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 83532 00:14:43.403 Received shutdown signal, test time was about 10.000000 seconds 00:14:43.403 00:14:43.404 Latency(us) 00:14:43.404 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:43.404 =================================================================================================================== 00:14:43.404 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:43.404 [2024-07-15 18:34:05.844078] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:43.404 18:34:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 83532 00:14:43.663 18:34:06 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:14:43.663 18:34:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:14:43.663 18:34:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:43.663 18:34:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:43.663 18:34:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:43.663 18:34:06 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.tnWw9LMegD 00:14:43.663 18:34:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:14:43.663 18:34:06 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.tnWw9LMegD 00:14:43.663 18:34:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:14:43.663 18:34:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:43.663 18:34:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:14:43.663 18:34:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:43.663 18:34:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.tnWw9LMegD 00:14:43.663 18:34:06 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:43.663 18:34:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:14:43.663 18:34:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:43.663 18:34:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.tnWw9LMegD' 00:14:43.663 18:34:06 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:43.663 18:34:06 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83572 00:14:43.663 18:34:06 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:43.663 18:34:06 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:43.663 18:34:06 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83572 /var/tmp/bdevperf.sock 00:14:43.663 18:34:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 83572 ']' 00:14:43.663 18:34:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:43.663 18:34:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:43.663 18:34:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:43.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:43.664 18:34:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:43.664 18:34:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:43.664 [2024-07-15 18:34:06.084385] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 
00:14:43.664 [2024-07-15 18:34:06.084900] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83572 ] 00:14:43.664 [2024-07-15 18:34:06.228416] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:43.922 [2024-07-15 18:34:06.324449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:44.489 18:34:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:44.489 18:34:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:44.489 18:34:06 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.tnWw9LMegD 00:14:44.748 [2024-07-15 18:34:07.104159] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:44.748 [2024-07-15 18:34:07.104251] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:44.748 [2024-07-15 18:34:07.108549] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:44.748 [2024-07-15 18:34:07.108596] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:44.748 [2024-07-15 18:34:07.108645] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:44.748 [2024-07-15 18:34:07.109302] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x67aca0 (107): Transport endpoint is not connected 00:14:44.748 [2024-07-15 18:34:07.110287] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x67aca0 (9): Bad file descriptor 00:14:44.748 [2024-07-15 18:34:07.111284] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:14:44.748 [2024-07-15 18:34:07.111303] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:44.748 [2024-07-15 18:34:07.111315] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:14:44.748 2024/07/15 18:34:07 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.tnWw9LMegD subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:14:44.748 request: 00:14:44.748 { 00:14:44.748 "method": "bdev_nvme_attach_controller", 00:14:44.748 "params": { 00:14:44.748 "name": "TLSTEST", 00:14:44.748 "trtype": "tcp", 00:14:44.748 "traddr": "10.0.0.2", 00:14:44.748 "adrfam": "ipv4", 00:14:44.748 "trsvcid": "4420", 00:14:44.748 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:14:44.748 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:44.748 "prchk_reftag": false, 00:14:44.748 "prchk_guard": false, 00:14:44.748 "hdgst": false, 00:14:44.748 "ddgst": false, 00:14:44.748 "psk": "/tmp/tmp.tnWw9LMegD" 00:14:44.748 } 00:14:44.748 } 00:14:44.748 Got JSON-RPC error response 00:14:44.748 GoRPCClient: error on JSON-RPC call 00:14:44.748 18:34:07 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 83572 00:14:44.749 18:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 83572 ']' 00:14:44.749 18:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 83572 00:14:44.749 18:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:44.749 18:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:44.749 18:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83572 00:14:44.749 18:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:44.749 18:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:14:44.749 killing process with pid 83572 00:14:44.749 18:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83572' 00:14:44.749 Received shutdown signal, test time was about 10.000000 seconds 00:14:44.749 00:14:44.749 Latency(us) 00:14:44.749 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:44.749 =================================================================================================================== 00:14:44.749 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:44.749 18:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 83572 00:14:44.749 [2024-07-15 18:34:07.165098] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:44.749 18:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 83572 00:14:44.749 18:34:07 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:14:44.749 18:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:14:44.749 18:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:44.749 18:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:44.749 18:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:44.749 18:34:07 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:44.749 18:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:14:44.749 18:34:07 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:44.749 18:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:14:44.749 18:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:44.749 18:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:14:44.749 18:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:44.749 18:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:44.749 18:34:07 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:44.749 18:34:07 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:44.749 18:34:07 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:44.749 18:34:07 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:14:44.749 18:34:07 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:44.749 18:34:07 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83622 00:14:44.749 18:34:07 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:44.749 18:34:07 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:44.749 18:34:07 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83622 /var/tmp/bdevperf.sock 00:14:44.749 18:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 83622 ']' 00:14:44.749 18:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:44.749 18:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:44.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:44.749 18:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:44.749 18:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:44.749 18:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:45.007 [2024-07-15 18:34:07.406696] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 
00:14:45.007 [2024-07-15 18:34:07.406773] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83622 ] 00:14:45.007 [2024-07-15 18:34:07.546992] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:45.266 [2024-07-15 18:34:07.635021] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:45.855 18:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:45.855 18:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:45.855 18:34:08 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:45.855 [2024-07-15 18:34:08.465225] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:45.855 [2024-07-15 18:34:08.467306] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21bd240 (9): Bad file descriptor 00:14:45.855 [2024-07-15 18:34:08.468298] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:14:45.855 [2024-07-15 18:34:08.468320] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:45.855 [2024-07-15 18:34:08.468332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:14:46.113 2024/07/15 18:34:08 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:14:46.113 request: 00:14:46.113 { 00:14:46.113 "method": "bdev_nvme_attach_controller", 00:14:46.113 "params": { 00:14:46.113 "name": "TLSTEST", 00:14:46.113 "trtype": "tcp", 00:14:46.113 "traddr": "10.0.0.2", 00:14:46.113 "adrfam": "ipv4", 00:14:46.113 "trsvcid": "4420", 00:14:46.113 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:46.113 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:46.113 "prchk_reftag": false, 00:14:46.113 "prchk_guard": false, 00:14:46.113 "hdgst": false, 00:14:46.113 "ddgst": false 00:14:46.113 } 00:14:46.113 } 00:14:46.113 Got JSON-RPC error response 00:14:46.113 GoRPCClient: error on JSON-RPC call 00:14:46.113 18:34:08 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 83622 00:14:46.113 18:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 83622 ']' 00:14:46.113 18:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 83622 00:14:46.113 18:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:46.113 18:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:46.113 18:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83622 00:14:46.113 18:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:46.113 18:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # 
'[' reactor_2 = sudo ']' 00:14:46.113 18:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83622' 00:14:46.113 killing process with pid 83622 00:14:46.113 18:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 83622 00:14:46.113 Received shutdown signal, test time was about 10.000000 seconds 00:14:46.113 00:14:46.113 Latency(us) 00:14:46.113 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:46.113 =================================================================================================================== 00:14:46.113 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:46.113 18:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 83622 00:14:46.113 18:34:08 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:14:46.113 18:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:14:46.113 18:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:46.113 18:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:46.113 18:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:46.113 18:34:08 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 82991 00:14:46.113 18:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 82991 ']' 00:14:46.113 18:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 82991 00:14:46.113 18:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:46.113 18:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:46.113 18:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82991 00:14:46.371 18:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:46.371 18:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:46.371 killing process with pid 82991 00:14:46.371 18:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82991' 00:14:46.372 18:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 82991 00:14:46.372 [2024-07-15 18:34:08.749298] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:46.372 18:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 82991 00:14:46.372 18:34:08 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:14:46.372 18:34:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:14:46.372 18:34:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:14:46.372 18:34:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:14:46.372 18:34:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:14:46.372 18:34:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:14:46.372 18:34:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:14:46.630 18:34:08 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:46.630 18:34:08 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:14:46.630 18:34:09 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.Ukncj92f3M 
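The key_long value generated just above is the NVMe TLS PSK "interchange" form: a fixed NVMeTLSkey-1 prefix, a two-digit hash identifier (02 here, i.e. SHA-384), and a base64 blob that carries the configured secret plus a 4-byte CRC-32. A minimal Python sketch of that wrapping, assuming the CRC-32 is appended little-endian over the ASCII secret, as the helper's output suggests (this is an illustration of the format, not the test's own helper):

    import base64
    import zlib

    def format_interchange_psk(secret: str, hash_id: int) -> str:
        # NVMeTLSkey-1:<hh>:<base64(secret bytes || CRC-32(secret), little-endian)>:
        raw = secret.encode()                        # the hex string is carried as ASCII bytes
        crc = zlib.crc32(raw).to_bytes(4, "little")  # integrity check appended to the secret
        return "NVMeTLSkey-1:{:02x}:{}:".format(hash_id, base64.b64encode(raw + crc).decode())

    # hash_id 2 corresponds to the SHA-384 retained-PSK digest requested in this run
    print(format_interchange_psk("00112233445566778899aabbccddeeff0011223344556677", 2))

If the CRC convention matches, this reproduces the NVMeTLSkey-1:02:... string that is echoed into the mktemp'd key file in the trace that follows.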
00:14:46.630 18:34:09 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:46.630 18:34:09 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.Ukncj92f3M 00:14:46.630 18:34:09 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:14:46.630 18:34:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:46.630 18:34:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:46.630 18:34:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:46.630 18:34:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=83673 00:14:46.630 18:34:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:46.630 18:34:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 83673 00:14:46.630 18:34:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 83673 ']' 00:14:46.630 18:34:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:46.630 18:34:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:46.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:46.630 18:34:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:46.630 18:34:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:46.630 18:34:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:46.630 [2024-07-15 18:34:09.071056] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:14:46.630 [2024-07-15 18:34:09.071144] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:46.630 [2024-07-15 18:34:09.212681] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:46.889 [2024-07-15 18:34:09.307463] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:46.889 [2024-07-15 18:34:09.307518] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:46.889 [2024-07-15 18:34:09.307528] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:46.889 [2024-07-15 18:34:09.307536] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:46.889 [2024-07-15 18:34:09.307544] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
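The nvmf_tgt instance starting up here is then configured over its default RPC socket, /var/tmp/spdk.sock, by the rpc.py calls traced just below: transport, subsystem, TLS-enabled listener (-k), a malloc bdev and namespace, and finally the host entry that carries the PSK path. A rough sketch of the same sequence as raw JSON-RPC requests; parameter names are taken from the rpc.py invocations and the save_config dump later in the run, the traced nvmf_create_transport also passes -o (C2H success toggle) which is omitted here, and this is only an illustration of the wire calls, not a substitute for rpc.py:

    import json
    import socket

    def rpc(sock, req_id, method, params):
        """Send one JSON-RPC request and read until a complete JSON reply has arrived."""
        sock.sendall(json.dumps({"jsonrpc": "2.0", "id": req_id,
                                 "method": method, "params": params}).encode())
        buf = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                raise ConnectionError("RPC socket closed before a full response arrived")
            buf += chunk
            try:
                return json.loads(buf.decode())
            except ValueError:
                continue  # partial response, keep reading

    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect("/var/tmp/spdk.sock")
        rpc(s, 1, "nvmf_create_transport", {"trtype": "TCP"})
        rpc(s, 2, "nvmf_create_subsystem", {"nqn": "nqn.2016-06.io.spdk:cnode1",
                                            "serial_number": "SPDK00000000000001",
                                            "max_namespaces": 10})
        rpc(s, 3, "nvmf_subsystem_add_listener", {"nqn": "nqn.2016-06.io.spdk:cnode1",
                                                  "secure_channel": True,  # rpc.py's -k flag
                                                  "listen_address": {"trtype": "TCP", "adrfam": "IPv4",
                                                                     "traddr": "10.0.0.2",
                                                                     "trsvcid": "4420"}})
        rpc(s, 4, "bdev_malloc_create", {"name": "malloc0",
                                         "num_blocks": 8192, "block_size": 4096})
        rpc(s, 5, "nvmf_subsystem_add_ns", {"nqn": "nqn.2016-06.io.spdk:cnode1",
                                            "namespace": {"bdev_name": "malloc0"}})
        rpc(s, 6, "nvmf_subsystem_add_host", {"nqn": "nqn.2016-06.io.spdk:cnode1",
                                              "host": "nqn.2016-06.io.spdk:host1",
                                              "psk": "/tmp/tmp.Ukncj92f3M"})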
00:14:46.889 [2024-07-15 18:34:09.307585] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:47.456 18:34:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:47.456 18:34:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:47.456 18:34:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:47.456 18:34:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:47.456 18:34:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:47.456 18:34:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:47.456 18:34:09 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.Ukncj92f3M 00:14:47.456 18:34:09 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.Ukncj92f3M 00:14:47.457 18:34:09 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:47.716 [2024-07-15 18:34:10.170912] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:47.716 18:34:10 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:47.975 18:34:10 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:48.234 [2024-07-15 18:34:10.598283] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:48.234 [2024-07-15 18:34:10.598467] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:48.234 18:34:10 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:48.234 malloc0 00:14:48.492 18:34:10 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:48.492 18:34:11 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Ukncj92f3M 00:14:48.750 [2024-07-15 18:34:11.258675] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:48.750 18:34:11 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Ukncj92f3M 00:14:48.750 18:34:11 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:48.750 18:34:11 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:48.750 18:34:11 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:48.750 18:34:11 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Ukncj92f3M' 00:14:48.750 18:34:11 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:48.750 18:34:11 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83776 00:14:48.750 18:34:11 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:48.750 18:34:11 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:48.750 18:34:11 
nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83776 /var/tmp/bdevperf.sock 00:14:48.750 18:34:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 83776 ']' 00:14:48.750 18:34:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:48.750 18:34:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:48.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:48.750 18:34:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:48.750 18:34:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:48.750 18:34:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:48.750 [2024-07-15 18:34:11.336759] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:14:48.750 [2024-07-15 18:34:11.336871] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83776 ] 00:14:49.047 [2024-07-15 18:34:11.465856] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.047 [2024-07-15 18:34:11.561904] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:49.614 18:34:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:49.614 18:34:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:49.614 18:34:12 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Ukncj92f3M 00:14:49.872 [2024-07-15 18:34:12.402943] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:49.872 [2024-07-15 18:34:12.403058] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:49.872 TLSTESTn1 00:14:50.130 18:34:12 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:50.130 Running I/O for 10 seconds... 
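The successful attach above is the same bdev_nvme_attach_controller call that failed earlier, now pointing at the 0600 key file, and bdevperf.py then starts the workload with a second RPC on the same socket. A rough equivalent of those two requests sent directly to /var/tmp/bdevperf.sock, with the attach parameters copied from the request bodies printed in this log (single-recv reads are an assumption for brevity; a real client should keep reading until the full JSON response has arrived):

    import json
    import socket

    def rpc(sock, req_id, method, params=None):
        msg = {"jsonrpc": "2.0", "id": req_id, "method": method}
        if params is not None:
            msg["params"] = params
        sock.sendall(json.dumps(msg).encode())
        return json.loads(sock.recv(65536).decode())  # assumes the reply fits in one recv

    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect("/var/tmp/bdevperf.sock")
        # Equivalent of: rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller ... --psk /tmp/tmp.Ukncj92f3M
        rpc(s, 1, "bdev_nvme_attach_controller", {
            "name": "TLSTEST", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4",
            "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1", "psk": "/tmp/tmp.Ukncj92f3M"})
        # Equivalent of: bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests
        print(rpc(s, 2, "perform_tests"))

Because bdevperf was launched with -z, it idles until these RPCs arrive, which is why the script first waits on the socket before issuing them.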
00:15:00.098 00:15:00.098 Latency(us) 00:15:00.098 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:00.098 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:00.098 Verification LBA range: start 0x0 length 0x2000 00:15:00.098 TLSTESTn1 : 10.01 5447.81 21.28 0.00 0.00 23456.34 5711.37 20634.63 00:15:00.098 =================================================================================================================== 00:15:00.098 Total : 5447.81 21.28 0.00 0.00 23456.34 5711.37 20634.63 00:15:00.098 0 00:15:00.098 18:34:22 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:00.098 18:34:22 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 83776 00:15:00.098 18:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 83776 ']' 00:15:00.098 18:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 83776 00:15:00.098 18:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:00.098 18:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:00.098 18:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83776 00:15:00.098 18:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:15:00.098 18:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:15:00.098 killing process with pid 83776 00:15:00.098 18:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83776' 00:15:00.098 Received shutdown signal, test time was about 10.000000 seconds 00:15:00.098 00:15:00.098 Latency(us) 00:15:00.098 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:00.098 =================================================================================================================== 00:15:00.098 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:00.098 18:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 83776 00:15:00.098 [2024-07-15 18:34:22.660064] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:00.098 18:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 83776 00:15:00.357 18:34:22 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.Ukncj92f3M 00:15:00.357 18:34:22 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Ukncj92f3M 00:15:00.357 18:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:15:00.357 18:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Ukncj92f3M 00:15:00.357 18:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:15:00.357 18:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:00.357 18:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:15:00.357 18:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:00.357 18:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Ukncj92f3M 00:15:00.357 18:34:22 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:00.357 
18:34:22 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:00.357 18:34:22 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:00.357 18:34:22 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Ukncj92f3M' 00:15:00.357 18:34:22 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:00.357 18:34:22 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83924 00:15:00.357 18:34:22 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:00.357 18:34:22 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:00.357 18:34:22 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83924 /var/tmp/bdevperf.sock 00:15:00.357 18:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 83924 ']' 00:15:00.357 18:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:00.357 18:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:00.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:00.357 18:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:00.357 18:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:00.357 18:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:00.357 [2024-07-15 18:34:22.916673] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:15:00.357 [2024-07-15 18:34:22.916772] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83924 ] 00:15:00.617 [2024-07-15 18:34:23.051334] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:00.617 [2024-07-15 18:34:23.149634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:01.554 18:34:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:01.554 18:34:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:01.554 18:34:23 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Ukncj92f3M 00:15:01.554 [2024-07-15 18:34:24.008544] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:01.555 [2024-07-15 18:34:24.008624] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:15:01.555 [2024-07-15 18:34:24.008634] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.Ukncj92f3M 00:15:01.555 2024/07/15 18:34:24 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.Ukncj92f3M subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for 
bdev_nvme_attach_controller method, err: Code=-1 Msg=Operation not permitted 00:15:01.555 request: 00:15:01.555 { 00:15:01.555 "method": "bdev_nvme_attach_controller", 00:15:01.555 "params": { 00:15:01.555 "name": "TLSTEST", 00:15:01.555 "trtype": "tcp", 00:15:01.555 "traddr": "10.0.0.2", 00:15:01.555 "adrfam": "ipv4", 00:15:01.555 "trsvcid": "4420", 00:15:01.555 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:01.555 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:01.555 "prchk_reftag": false, 00:15:01.555 "prchk_guard": false, 00:15:01.555 "hdgst": false, 00:15:01.555 "ddgst": false, 00:15:01.555 "psk": "/tmp/tmp.Ukncj92f3M" 00:15:01.555 } 00:15:01.555 } 00:15:01.555 Got JSON-RPC error response 00:15:01.555 GoRPCClient: error on JSON-RPC call 00:15:01.555 18:34:24 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 83924 00:15:01.555 18:34:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 83924 ']' 00:15:01.555 18:34:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 83924 00:15:01.555 18:34:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:01.555 18:34:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:01.555 18:34:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83924 00:15:01.555 18:34:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:15:01.555 18:34:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:15:01.555 killing process with pid 83924 00:15:01.555 18:34:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83924' 00:15:01.555 Received shutdown signal, test time was about 10.000000 seconds 00:15:01.555 00:15:01.555 Latency(us) 00:15:01.555 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:01.555 =================================================================================================================== 00:15:01.555 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:01.555 18:34:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 83924 00:15:01.555 18:34:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 83924 00:15:01.814 18:34:24 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:15:01.814 18:34:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:15:01.814 18:34:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:01.814 18:34:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:01.814 18:34:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:01.814 18:34:24 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 83673 00:15:01.814 18:34:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 83673 ']' 00:15:01.814 18:34:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 83673 00:15:01.814 18:34:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:01.814 18:34:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:01.814 18:34:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83673 00:15:01.814 18:34:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:01.814 18:34:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:01.814 killing process with pid 83673 00:15:01.814 18:34:24 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 83673' 00:15:01.814 18:34:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 83673 00:15:01.814 [2024-07-15 18:34:24.294726] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:01.814 18:34:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 83673 00:15:02.072 18:34:24 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:15:02.072 18:34:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:02.072 18:34:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:02.072 18:34:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:02.072 18:34:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:02.072 18:34:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=83974 00:15:02.072 18:34:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 83974 00:15:02.072 18:34:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 83974 ']' 00:15:02.072 18:34:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:02.072 18:34:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:02.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:02.072 18:34:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:02.072 18:34:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:02.072 18:34:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:02.072 [2024-07-15 18:34:24.558456] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:15:02.072 [2024-07-15 18:34:24.558544] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:02.331 [2024-07-15 18:34:24.701488] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:02.331 [2024-07-15 18:34:24.796303] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:02.331 [2024-07-15 18:34:24.796353] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:02.331 [2024-07-15 18:34:24.796363] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:02.331 [2024-07-15 18:34:24.796371] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:02.331 [2024-07-15 18:34:24.796379] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:02.331 [2024-07-15 18:34:24.796404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:02.898 18:34:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:02.898 18:34:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:02.898 18:34:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:02.898 18:34:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:02.898 18:34:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:02.898 18:34:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:02.898 18:34:25 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.Ukncj92f3M 00:15:02.898 18:34:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:15:02.898 18:34:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.Ukncj92f3M 00:15:02.898 18:34:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:15:02.898 18:34:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:02.898 18:34:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:15:02.898 18:34:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:02.898 18:34:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.Ukncj92f3M 00:15:02.898 18:34:25 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.Ukncj92f3M 00:15:02.898 18:34:25 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:03.157 [2024-07-15 18:34:25.672967] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:03.157 18:34:25 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:03.416 18:34:25 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:15:03.676 [2024-07-15 18:34:26.080499] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:03.676 [2024-07-15 18:34:26.080688] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:03.676 18:34:26 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:03.676 malloc0 00:15:03.676 18:34:26 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:03.936 18:34:26 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Ukncj92f3M 00:15:04.195 [2024-07-15 18:34:26.660519] tcp.c:3603:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:15:04.195 [2024-07-15 18:34:26.660554] tcp.c:3689:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:15:04.195 [2024-07-15 18:34:26.660587] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:15:04.195 2024/07/15 18:34:26 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: 
map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:/tmp/tmp.Ukncj92f3M], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:15:04.195 request: 00:15:04.195 { 00:15:04.195 "method": "nvmf_subsystem_add_host", 00:15:04.195 "params": { 00:15:04.195 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:04.195 "host": "nqn.2016-06.io.spdk:host1", 00:15:04.195 "psk": "/tmp/tmp.Ukncj92f3M" 00:15:04.195 } 00:15:04.195 } 00:15:04.195 Got JSON-RPC error response 00:15:04.195 GoRPCClient: error on JSON-RPC call 00:15:04.195 18:34:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:15:04.195 18:34:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:04.195 18:34:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:04.195 18:34:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:04.195 18:34:26 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 83974 00:15:04.195 18:34:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 83974 ']' 00:15:04.195 18:34:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 83974 00:15:04.195 18:34:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:04.195 18:34:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:04.195 18:34:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83974 00:15:04.195 18:34:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:04.195 killing process with pid 83974 00:15:04.195 18:34:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:04.195 18:34:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83974' 00:15:04.195 18:34:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 83974 00:15:04.195 18:34:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 83974 00:15:04.455 18:34:26 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.Ukncj92f3M 00:15:04.455 18:34:26 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:15:04.455 18:34:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:04.455 18:34:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:04.455 18:34:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:04.455 18:34:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:04.455 18:34:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84085 00:15:04.455 18:34:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84085 00:15:04.455 18:34:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84085 ']' 00:15:04.455 18:34:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:04.455 18:34:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:04.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:04.455 18:34:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
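The chmod back to 0600 just above is what the preceding failures were about: with the key file at 0666, both bdev_nvme_load_psk on the initiator side and tcp_load_psk on the target side rejected it with "Incorrect permissions for PSK file", and the test only proceeds once owner-only access is restored. When generating such a key file programmatically it is simpler to create it with a restrictive mode up front; a small sketch, where the key string is the one from this run and the path is purely illustrative (not the mktemp path used in the trace):

    import os

    def write_psk_file(path: str, interchange_key: str) -> None:
        # Create the file owner-read/write only (0600) so the PSK permission check passes;
        # the effective mode is still subject to the process umask.
        fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
        with os.fdopen(fd, "w") as f:
            f.write(interchange_key)

    write_psk_file(
        "/tmp/tls_psk.txt",  # illustrative path only
        "NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:")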
00:15:04.455 18:34:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:04.455 18:34:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:04.455 [2024-07-15 18:34:26.979510] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:15:04.455 [2024-07-15 18:34:26.979600] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:04.713 [2024-07-15 18:34:27.109611] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:04.713 [2024-07-15 18:34:27.199419] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:04.713 [2024-07-15 18:34:27.199467] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:04.713 [2024-07-15 18:34:27.199479] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:04.713 [2024-07-15 18:34:27.199491] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:04.713 [2024-07-15 18:34:27.199498] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:04.713 [2024-07-15 18:34:27.199522] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:05.649 18:34:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:05.649 18:34:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:05.649 18:34:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:05.649 18:34:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:05.649 18:34:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:05.649 18:34:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:05.649 18:34:27 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.Ukncj92f3M 00:15:05.649 18:34:27 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.Ukncj92f3M 00:15:05.649 18:34:27 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:05.649 [2024-07-15 18:34:28.134252] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:05.649 18:34:28 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:05.908 18:34:28 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:15:05.908 [2024-07-15 18:34:28.505690] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:05.908 [2024-07-15 18:34:28.505872] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:06.167 18:34:28 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:06.167 malloc0 00:15:06.167 18:34:28 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:06.426 18:34:28 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Ukncj92f3M 00:15:06.685 [2024-07-15 18:34:29.065784] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:06.685 18:34:29 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=84183 00:15:06.685 18:34:29 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:06.685 18:34:29 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:06.685 18:34:29 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 84183 /var/tmp/bdevperf.sock 00:15:06.685 18:34:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84183 ']' 00:15:06.685 18:34:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:06.685 18:34:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:06.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:06.685 18:34:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:06.685 18:34:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:06.685 18:34:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:06.685 [2024-07-15 18:34:29.131722] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:15:06.685 [2024-07-15 18:34:29.131803] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84183 ] 00:15:06.685 [2024-07-15 18:34:29.269759] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:06.944 [2024-07-15 18:34:29.362425] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:07.512 18:34:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:07.512 18:34:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:07.512 18:34:29 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Ukncj92f3M 00:15:07.771 [2024-07-15 18:34:30.160348] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:07.771 [2024-07-15 18:34:30.160448] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:07.771 TLSTESTn1 00:15:07.771 18:34:30 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:15:08.030 18:34:30 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:15:08.030 "subsystems": [ 00:15:08.030 { 00:15:08.030 "subsystem": "keyring", 00:15:08.030 "config": [] 00:15:08.030 }, 00:15:08.030 { 00:15:08.030 "subsystem": "iobuf", 00:15:08.030 "config": [ 00:15:08.030 { 00:15:08.030 "method": "iobuf_set_options", 00:15:08.030 "params": { 00:15:08.030 "large_bufsize": 
135168, 00:15:08.030 "large_pool_count": 1024, 00:15:08.030 "small_bufsize": 8192, 00:15:08.030 "small_pool_count": 8192 00:15:08.030 } 00:15:08.030 } 00:15:08.030 ] 00:15:08.030 }, 00:15:08.030 { 00:15:08.030 "subsystem": "sock", 00:15:08.030 "config": [ 00:15:08.030 { 00:15:08.030 "method": "sock_set_default_impl", 00:15:08.030 "params": { 00:15:08.030 "impl_name": "posix" 00:15:08.030 } 00:15:08.030 }, 00:15:08.030 { 00:15:08.030 "method": "sock_impl_set_options", 00:15:08.030 "params": { 00:15:08.030 "enable_ktls": false, 00:15:08.030 "enable_placement_id": 0, 00:15:08.031 "enable_quickack": false, 00:15:08.031 "enable_recv_pipe": true, 00:15:08.031 "enable_zerocopy_send_client": false, 00:15:08.031 "enable_zerocopy_send_server": true, 00:15:08.031 "impl_name": "ssl", 00:15:08.031 "recv_buf_size": 4096, 00:15:08.031 "send_buf_size": 4096, 00:15:08.031 "tls_version": 0, 00:15:08.031 "zerocopy_threshold": 0 00:15:08.031 } 00:15:08.031 }, 00:15:08.031 { 00:15:08.031 "method": "sock_impl_set_options", 00:15:08.031 "params": { 00:15:08.031 "enable_ktls": false, 00:15:08.031 "enable_placement_id": 0, 00:15:08.031 "enable_quickack": false, 00:15:08.031 "enable_recv_pipe": true, 00:15:08.031 "enable_zerocopy_send_client": false, 00:15:08.031 "enable_zerocopy_send_server": true, 00:15:08.031 "impl_name": "posix", 00:15:08.031 "recv_buf_size": 2097152, 00:15:08.031 "send_buf_size": 2097152, 00:15:08.031 "tls_version": 0, 00:15:08.031 "zerocopy_threshold": 0 00:15:08.031 } 00:15:08.031 } 00:15:08.031 ] 00:15:08.031 }, 00:15:08.031 { 00:15:08.031 "subsystem": "vmd", 00:15:08.031 "config": [] 00:15:08.031 }, 00:15:08.031 { 00:15:08.031 "subsystem": "accel", 00:15:08.031 "config": [ 00:15:08.031 { 00:15:08.031 "method": "accel_set_options", 00:15:08.031 "params": { 00:15:08.031 "buf_count": 2048, 00:15:08.031 "large_cache_size": 16, 00:15:08.031 "sequence_count": 2048, 00:15:08.031 "small_cache_size": 128, 00:15:08.031 "task_count": 2048 00:15:08.031 } 00:15:08.031 } 00:15:08.031 ] 00:15:08.031 }, 00:15:08.031 { 00:15:08.031 "subsystem": "bdev", 00:15:08.031 "config": [ 00:15:08.031 { 00:15:08.031 "method": "bdev_set_options", 00:15:08.031 "params": { 00:15:08.031 "bdev_auto_examine": true, 00:15:08.031 "bdev_io_cache_size": 256, 00:15:08.031 "bdev_io_pool_size": 65535, 00:15:08.031 "iobuf_large_cache_size": 16, 00:15:08.031 "iobuf_small_cache_size": 128 00:15:08.031 } 00:15:08.031 }, 00:15:08.031 { 00:15:08.031 "method": "bdev_raid_set_options", 00:15:08.031 "params": { 00:15:08.031 "process_window_size_kb": 1024 00:15:08.031 } 00:15:08.031 }, 00:15:08.031 { 00:15:08.031 "method": "bdev_iscsi_set_options", 00:15:08.031 "params": { 00:15:08.031 "timeout_sec": 30 00:15:08.031 } 00:15:08.031 }, 00:15:08.031 { 00:15:08.031 "method": "bdev_nvme_set_options", 00:15:08.031 "params": { 00:15:08.031 "action_on_timeout": "none", 00:15:08.031 "allow_accel_sequence": false, 00:15:08.031 "arbitration_burst": 0, 00:15:08.031 "bdev_retry_count": 3, 00:15:08.031 "ctrlr_loss_timeout_sec": 0, 00:15:08.031 "delay_cmd_submit": true, 00:15:08.031 "dhchap_dhgroups": [ 00:15:08.031 "null", 00:15:08.031 "ffdhe2048", 00:15:08.031 "ffdhe3072", 00:15:08.031 "ffdhe4096", 00:15:08.031 "ffdhe6144", 00:15:08.031 "ffdhe8192" 00:15:08.031 ], 00:15:08.031 "dhchap_digests": [ 00:15:08.031 "sha256", 00:15:08.031 "sha384", 00:15:08.031 "sha512" 00:15:08.031 ], 00:15:08.031 "disable_auto_failback": false, 00:15:08.031 "fast_io_fail_timeout_sec": 0, 00:15:08.031 "generate_uuids": false, 00:15:08.031 "high_priority_weight": 0, 
00:15:08.031 "io_path_stat": false, 00:15:08.031 "io_queue_requests": 0, 00:15:08.031 "keep_alive_timeout_ms": 10000, 00:15:08.031 "low_priority_weight": 0, 00:15:08.031 "medium_priority_weight": 0, 00:15:08.031 "nvme_adminq_poll_period_us": 10000, 00:15:08.031 "nvme_error_stat": false, 00:15:08.031 "nvme_ioq_poll_period_us": 0, 00:15:08.031 "rdma_cm_event_timeout_ms": 0, 00:15:08.031 "rdma_max_cq_size": 0, 00:15:08.031 "rdma_srq_size": 0, 00:15:08.031 "reconnect_delay_sec": 0, 00:15:08.031 "timeout_admin_us": 0, 00:15:08.031 "timeout_us": 0, 00:15:08.031 "transport_ack_timeout": 0, 00:15:08.031 "transport_retry_count": 4, 00:15:08.031 "transport_tos": 0 00:15:08.031 } 00:15:08.031 }, 00:15:08.031 { 00:15:08.031 "method": "bdev_nvme_set_hotplug", 00:15:08.031 "params": { 00:15:08.031 "enable": false, 00:15:08.031 "period_us": 100000 00:15:08.031 } 00:15:08.031 }, 00:15:08.031 { 00:15:08.031 "method": "bdev_malloc_create", 00:15:08.031 "params": { 00:15:08.031 "block_size": 4096, 00:15:08.031 "name": "malloc0", 00:15:08.031 "num_blocks": 8192, 00:15:08.031 "optimal_io_boundary": 0, 00:15:08.031 "physical_block_size": 4096, 00:15:08.031 "uuid": "1abda181-9f33-4470-b8a5-f9f58aaceaa9" 00:15:08.031 } 00:15:08.031 }, 00:15:08.031 { 00:15:08.031 "method": "bdev_wait_for_examine" 00:15:08.031 } 00:15:08.031 ] 00:15:08.031 }, 00:15:08.031 { 00:15:08.031 "subsystem": "nbd", 00:15:08.031 "config": [] 00:15:08.031 }, 00:15:08.031 { 00:15:08.031 "subsystem": "scheduler", 00:15:08.031 "config": [ 00:15:08.031 { 00:15:08.031 "method": "framework_set_scheduler", 00:15:08.031 "params": { 00:15:08.031 "name": "static" 00:15:08.031 } 00:15:08.031 } 00:15:08.031 ] 00:15:08.031 }, 00:15:08.031 { 00:15:08.031 "subsystem": "nvmf", 00:15:08.031 "config": [ 00:15:08.031 { 00:15:08.031 "method": "nvmf_set_config", 00:15:08.031 "params": { 00:15:08.031 "admin_cmd_passthru": { 00:15:08.031 "identify_ctrlr": false 00:15:08.031 }, 00:15:08.031 "discovery_filter": "match_any" 00:15:08.031 } 00:15:08.031 }, 00:15:08.031 { 00:15:08.031 "method": "nvmf_set_max_subsystems", 00:15:08.031 "params": { 00:15:08.031 "max_subsystems": 1024 00:15:08.031 } 00:15:08.031 }, 00:15:08.031 { 00:15:08.031 "method": "nvmf_set_crdt", 00:15:08.031 "params": { 00:15:08.031 "crdt1": 0, 00:15:08.031 "crdt2": 0, 00:15:08.031 "crdt3": 0 00:15:08.031 } 00:15:08.031 }, 00:15:08.031 { 00:15:08.031 "method": "nvmf_create_transport", 00:15:08.031 "params": { 00:15:08.031 "abort_timeout_sec": 1, 00:15:08.031 "ack_timeout": 0, 00:15:08.031 "buf_cache_size": 4294967295, 00:15:08.031 "c2h_success": false, 00:15:08.031 "data_wr_pool_size": 0, 00:15:08.031 "dif_insert_or_strip": false, 00:15:08.031 "in_capsule_data_size": 4096, 00:15:08.031 "io_unit_size": 131072, 00:15:08.031 "max_aq_depth": 128, 00:15:08.031 "max_io_qpairs_per_ctrlr": 127, 00:15:08.031 "max_io_size": 131072, 00:15:08.031 "max_queue_depth": 128, 00:15:08.031 "num_shared_buffers": 511, 00:15:08.031 "sock_priority": 0, 00:15:08.031 "trtype": "TCP", 00:15:08.031 "zcopy": false 00:15:08.031 } 00:15:08.031 }, 00:15:08.031 { 00:15:08.031 "method": "nvmf_create_subsystem", 00:15:08.031 "params": { 00:15:08.031 "allow_any_host": false, 00:15:08.031 "ana_reporting": false, 00:15:08.031 "max_cntlid": 65519, 00:15:08.031 "max_namespaces": 10, 00:15:08.031 "min_cntlid": 1, 00:15:08.031 "model_number": "SPDK bdev Controller", 00:15:08.031 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:08.031 "serial_number": "SPDK00000000000001" 00:15:08.031 } 00:15:08.031 }, 00:15:08.031 { 00:15:08.031 "method": 
"nvmf_subsystem_add_host", 00:15:08.031 "params": { 00:15:08.031 "host": "nqn.2016-06.io.spdk:host1", 00:15:08.031 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:08.031 "psk": "/tmp/tmp.Ukncj92f3M" 00:15:08.031 } 00:15:08.031 }, 00:15:08.031 { 00:15:08.031 "method": "nvmf_subsystem_add_ns", 00:15:08.031 "params": { 00:15:08.031 "namespace": { 00:15:08.031 "bdev_name": "malloc0", 00:15:08.031 "nguid": "1ABDA1819F334470B8A5F9F58AACEAA9", 00:15:08.031 "no_auto_visible": false, 00:15:08.031 "nsid": 1, 00:15:08.031 "uuid": "1abda181-9f33-4470-b8a5-f9f58aaceaa9" 00:15:08.031 }, 00:15:08.031 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:15:08.031 } 00:15:08.032 }, 00:15:08.032 { 00:15:08.032 "method": "nvmf_subsystem_add_listener", 00:15:08.032 "params": { 00:15:08.032 "listen_address": { 00:15:08.032 "adrfam": "IPv4", 00:15:08.032 "traddr": "10.0.0.2", 00:15:08.032 "trsvcid": "4420", 00:15:08.032 "trtype": "TCP" 00:15:08.032 }, 00:15:08.032 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:08.032 "secure_channel": true 00:15:08.032 } 00:15:08.032 } 00:15:08.032 ] 00:15:08.032 } 00:15:08.032 ] 00:15:08.032 }' 00:15:08.032 18:34:30 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:15:08.291 18:34:30 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:15:08.291 "subsystems": [ 00:15:08.291 { 00:15:08.291 "subsystem": "keyring", 00:15:08.291 "config": [] 00:15:08.291 }, 00:15:08.291 { 00:15:08.291 "subsystem": "iobuf", 00:15:08.291 "config": [ 00:15:08.291 { 00:15:08.291 "method": "iobuf_set_options", 00:15:08.291 "params": { 00:15:08.291 "large_bufsize": 135168, 00:15:08.291 "large_pool_count": 1024, 00:15:08.291 "small_bufsize": 8192, 00:15:08.291 "small_pool_count": 8192 00:15:08.291 } 00:15:08.291 } 00:15:08.291 ] 00:15:08.291 }, 00:15:08.291 { 00:15:08.291 "subsystem": "sock", 00:15:08.291 "config": [ 00:15:08.291 { 00:15:08.291 "method": "sock_set_default_impl", 00:15:08.291 "params": { 00:15:08.291 "impl_name": "posix" 00:15:08.291 } 00:15:08.291 }, 00:15:08.291 { 00:15:08.291 "method": "sock_impl_set_options", 00:15:08.291 "params": { 00:15:08.291 "enable_ktls": false, 00:15:08.291 "enable_placement_id": 0, 00:15:08.291 "enable_quickack": false, 00:15:08.291 "enable_recv_pipe": true, 00:15:08.291 "enable_zerocopy_send_client": false, 00:15:08.291 "enable_zerocopy_send_server": true, 00:15:08.291 "impl_name": "ssl", 00:15:08.291 "recv_buf_size": 4096, 00:15:08.291 "send_buf_size": 4096, 00:15:08.291 "tls_version": 0, 00:15:08.291 "zerocopy_threshold": 0 00:15:08.291 } 00:15:08.291 }, 00:15:08.291 { 00:15:08.291 "method": "sock_impl_set_options", 00:15:08.291 "params": { 00:15:08.291 "enable_ktls": false, 00:15:08.291 "enable_placement_id": 0, 00:15:08.291 "enable_quickack": false, 00:15:08.291 "enable_recv_pipe": true, 00:15:08.291 "enable_zerocopy_send_client": false, 00:15:08.291 "enable_zerocopy_send_server": true, 00:15:08.292 "impl_name": "posix", 00:15:08.292 "recv_buf_size": 2097152, 00:15:08.292 "send_buf_size": 2097152, 00:15:08.292 "tls_version": 0, 00:15:08.292 "zerocopy_threshold": 0 00:15:08.292 } 00:15:08.292 } 00:15:08.292 ] 00:15:08.292 }, 00:15:08.292 { 00:15:08.292 "subsystem": "vmd", 00:15:08.292 "config": [] 00:15:08.292 }, 00:15:08.292 { 00:15:08.292 "subsystem": "accel", 00:15:08.292 "config": [ 00:15:08.292 { 00:15:08.292 "method": "accel_set_options", 00:15:08.292 "params": { 00:15:08.292 "buf_count": 2048, 00:15:08.292 "large_cache_size": 16, 00:15:08.292 "sequence_count": 2048, 00:15:08.292 
"small_cache_size": 128, 00:15:08.292 "task_count": 2048 00:15:08.292 } 00:15:08.292 } 00:15:08.292 ] 00:15:08.292 }, 00:15:08.292 { 00:15:08.292 "subsystem": "bdev", 00:15:08.292 "config": [ 00:15:08.292 { 00:15:08.292 "method": "bdev_set_options", 00:15:08.292 "params": { 00:15:08.292 "bdev_auto_examine": true, 00:15:08.292 "bdev_io_cache_size": 256, 00:15:08.292 "bdev_io_pool_size": 65535, 00:15:08.292 "iobuf_large_cache_size": 16, 00:15:08.292 "iobuf_small_cache_size": 128 00:15:08.292 } 00:15:08.292 }, 00:15:08.292 { 00:15:08.292 "method": "bdev_raid_set_options", 00:15:08.292 "params": { 00:15:08.292 "process_window_size_kb": 1024 00:15:08.292 } 00:15:08.292 }, 00:15:08.292 { 00:15:08.292 "method": "bdev_iscsi_set_options", 00:15:08.292 "params": { 00:15:08.292 "timeout_sec": 30 00:15:08.292 } 00:15:08.292 }, 00:15:08.292 { 00:15:08.292 "method": "bdev_nvme_set_options", 00:15:08.292 "params": { 00:15:08.292 "action_on_timeout": "none", 00:15:08.292 "allow_accel_sequence": false, 00:15:08.292 "arbitration_burst": 0, 00:15:08.292 "bdev_retry_count": 3, 00:15:08.292 "ctrlr_loss_timeout_sec": 0, 00:15:08.292 "delay_cmd_submit": true, 00:15:08.292 "dhchap_dhgroups": [ 00:15:08.292 "null", 00:15:08.292 "ffdhe2048", 00:15:08.292 "ffdhe3072", 00:15:08.292 "ffdhe4096", 00:15:08.292 "ffdhe6144", 00:15:08.292 "ffdhe8192" 00:15:08.292 ], 00:15:08.292 "dhchap_digests": [ 00:15:08.292 "sha256", 00:15:08.292 "sha384", 00:15:08.292 "sha512" 00:15:08.292 ], 00:15:08.292 "disable_auto_failback": false, 00:15:08.292 "fast_io_fail_timeout_sec": 0, 00:15:08.292 "generate_uuids": false, 00:15:08.292 "high_priority_weight": 0, 00:15:08.292 "io_path_stat": false, 00:15:08.292 "io_queue_requests": 512, 00:15:08.292 "keep_alive_timeout_ms": 10000, 00:15:08.292 "low_priority_weight": 0, 00:15:08.292 "medium_priority_weight": 0, 00:15:08.292 "nvme_adminq_poll_period_us": 10000, 00:15:08.292 "nvme_error_stat": false, 00:15:08.292 "nvme_ioq_poll_period_us": 0, 00:15:08.292 "rdma_cm_event_timeout_ms": 0, 00:15:08.292 "rdma_max_cq_size": 0, 00:15:08.292 "rdma_srq_size": 0, 00:15:08.292 "reconnect_delay_sec": 0, 00:15:08.292 "timeout_admin_us": 0, 00:15:08.292 "timeout_us": 0, 00:15:08.292 "transport_ack_timeout": 0, 00:15:08.292 "transport_retry_count": 4, 00:15:08.292 "transport_tos": 0 00:15:08.292 } 00:15:08.292 }, 00:15:08.292 { 00:15:08.292 "method": "bdev_nvme_attach_controller", 00:15:08.292 "params": { 00:15:08.292 "adrfam": "IPv4", 00:15:08.292 "ctrlr_loss_timeout_sec": 0, 00:15:08.292 "ddgst": false, 00:15:08.292 "fast_io_fail_timeout_sec": 0, 00:15:08.292 "hdgst": false, 00:15:08.292 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:08.292 "name": "TLSTEST", 00:15:08.292 "prchk_guard": false, 00:15:08.292 "prchk_reftag": false, 00:15:08.292 "psk": "/tmp/tmp.Ukncj92f3M", 00:15:08.292 "reconnect_delay_sec": 0, 00:15:08.292 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:08.292 "traddr": "10.0.0.2", 00:15:08.292 "trsvcid": "4420", 00:15:08.292 "trtype": "TCP" 00:15:08.292 } 00:15:08.292 }, 00:15:08.292 { 00:15:08.292 "method": "bdev_nvme_set_hotplug", 00:15:08.292 "params": { 00:15:08.292 "enable": false, 00:15:08.292 "period_us": 100000 00:15:08.292 } 00:15:08.292 }, 00:15:08.292 { 00:15:08.292 "method": "bdev_wait_for_examine" 00:15:08.292 } 00:15:08.292 ] 00:15:08.292 }, 00:15:08.292 { 00:15:08.292 "subsystem": "nbd", 00:15:08.292 "config": [] 00:15:08.292 } 00:15:08.292 ] 00:15:08.292 }' 00:15:08.292 18:34:30 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 84183 00:15:08.292 18:34:30 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84183 ']' 00:15:08.292 18:34:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84183 00:15:08.292 18:34:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:08.292 18:34:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:08.292 18:34:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84183 00:15:08.292 18:34:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:15:08.292 18:34:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:15:08.292 killing process with pid 84183 00:15:08.292 Received shutdown signal, test time was about 10.000000 seconds 00:15:08.292 00:15:08.292 Latency(us) 00:15:08.292 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:08.292 =================================================================================================================== 00:15:08.292 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:08.292 18:34:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84183' 00:15:08.292 18:34:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84183 00:15:08.292 [2024-07-15 18:34:30.835979] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:08.292 18:34:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84183 00:15:08.551 18:34:31 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 84085 00:15:08.551 18:34:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84085 ']' 00:15:08.551 18:34:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84085 00:15:08.551 18:34:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:08.551 18:34:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:08.551 18:34:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84085 00:15:08.551 18:34:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:08.552 18:34:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:08.552 18:34:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84085' 00:15:08.552 killing process with pid 84085 00:15:08.552 18:34:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84085 00:15:08.552 [2024-07-15 18:34:31.059247] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:08.552 18:34:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84085 00:15:08.811 18:34:31 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:15:08.811 18:34:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:08.811 18:34:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:08.811 18:34:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:08.811 18:34:31 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:15:08.811 "subsystems": [ 00:15:08.811 { 00:15:08.811 "subsystem": "keyring", 00:15:08.811 "config": [] 00:15:08.811 }, 00:15:08.811 { 00:15:08.811 "subsystem": "iobuf", 00:15:08.811 "config": [ 00:15:08.811 { 00:15:08.811 "method": 
"iobuf_set_options", 00:15:08.811 "params": { 00:15:08.811 "large_bufsize": 135168, 00:15:08.811 "large_pool_count": 1024, 00:15:08.811 "small_bufsize": 8192, 00:15:08.811 "small_pool_count": 8192 00:15:08.811 } 00:15:08.811 } 00:15:08.811 ] 00:15:08.811 }, 00:15:08.811 { 00:15:08.811 "subsystem": "sock", 00:15:08.811 "config": [ 00:15:08.811 { 00:15:08.811 "method": "sock_set_default_impl", 00:15:08.811 "params": { 00:15:08.811 "impl_name": "posix" 00:15:08.811 } 00:15:08.811 }, 00:15:08.811 { 00:15:08.811 "method": "sock_impl_set_options", 00:15:08.811 "params": { 00:15:08.811 "enable_ktls": false, 00:15:08.811 "enable_placement_id": 0, 00:15:08.811 "enable_quickack": false, 00:15:08.811 "enable_recv_pipe": true, 00:15:08.811 "enable_zerocopy_send_client": false, 00:15:08.811 "enable_zerocopy_send_server": true, 00:15:08.811 "impl_name": "ssl", 00:15:08.811 "recv_buf_size": 4096, 00:15:08.811 "send_buf_size": 4096, 00:15:08.811 "tls_version": 0, 00:15:08.811 "zerocopy_threshold": 0 00:15:08.811 } 00:15:08.811 }, 00:15:08.811 { 00:15:08.811 "method": "sock_impl_set_options", 00:15:08.811 "params": { 00:15:08.811 "enable_ktls": false, 00:15:08.811 "enable_placement_id": 0, 00:15:08.811 "enable_quickack": false, 00:15:08.811 "enable_recv_pipe": true, 00:15:08.811 "enable_zerocopy_send_client": false, 00:15:08.811 "enable_zerocopy_send_server": true, 00:15:08.811 "impl_name": "posix", 00:15:08.811 "recv_buf_size": 2097152, 00:15:08.811 "send_buf_size": 2097152, 00:15:08.811 "tls_version": 0, 00:15:08.811 "zerocopy_threshold": 0 00:15:08.811 } 00:15:08.811 } 00:15:08.811 ] 00:15:08.811 }, 00:15:08.811 { 00:15:08.811 "subsystem": "vmd", 00:15:08.811 "config": [] 00:15:08.811 }, 00:15:08.811 { 00:15:08.811 "subsystem": "accel", 00:15:08.811 "config": [ 00:15:08.811 { 00:15:08.811 "method": "accel_set_options", 00:15:08.811 "params": { 00:15:08.811 "buf_count": 2048, 00:15:08.811 "large_cache_size": 16, 00:15:08.811 "sequence_count": 2048, 00:15:08.811 "small_cache_size": 128, 00:15:08.811 "task_count": 2048 00:15:08.811 } 00:15:08.811 } 00:15:08.811 ] 00:15:08.811 }, 00:15:08.811 { 00:15:08.811 "subsystem": "bdev", 00:15:08.811 "config": [ 00:15:08.811 { 00:15:08.811 "method": "bdev_set_options", 00:15:08.811 "params": { 00:15:08.811 "bdev_auto_examine": true, 00:15:08.811 "bdev_io_cache_size": 256, 00:15:08.811 "bdev_io_pool_size": 65535, 00:15:08.811 "iobuf_large_cache_size": 16, 00:15:08.811 "iobuf_small_cache_size": 128 00:15:08.811 } 00:15:08.811 }, 00:15:08.811 { 00:15:08.811 "method": "bdev_raid_set_options", 00:15:08.811 "params": { 00:15:08.811 "process_window_size_kb": 1024 00:15:08.811 } 00:15:08.811 }, 00:15:08.811 { 00:15:08.811 "method": "bdev_iscsi_set_options", 00:15:08.811 "params": { 00:15:08.811 "timeout_sec": 30 00:15:08.811 } 00:15:08.811 }, 00:15:08.811 { 00:15:08.811 "method": "bdev_nvme_set_options", 00:15:08.811 "params": { 00:15:08.811 "action_on_timeout": "none", 00:15:08.811 "allow_accel_sequence": false, 00:15:08.811 "arbitration_burst": 0, 00:15:08.811 "bdev_retry_count": 3, 00:15:08.811 "ctrlr_loss_timeout_sec": 0, 00:15:08.811 "delay_cmd_submit": true, 00:15:08.811 "dhchap_dhgroups": [ 00:15:08.811 "null", 00:15:08.811 "ffdhe2048", 00:15:08.811 "ffdhe3072", 00:15:08.811 "ffdhe4096", 00:15:08.811 "ffdhe6144", 00:15:08.811 "ffdhe8192" 00:15:08.811 ], 00:15:08.811 "dhchap_digests": [ 00:15:08.811 "sha256", 00:15:08.811 "sha384", 00:15:08.811 "sha512" 00:15:08.811 ], 00:15:08.811 "disable_auto_failback": false, 00:15:08.811 "fast_io_fail_timeout_sec": 0, 00:15:08.811 
"generate_uuids": false, 00:15:08.811 "high_priority_weight": 0, 00:15:08.811 "io_path_stat": false, 00:15:08.811 "io_queue_requests": 0, 00:15:08.811 "keep_alive_timeout_ms": 10000, 00:15:08.811 "low_priority_weight": 0, 00:15:08.811 "medium_priority_weight": 0, 00:15:08.811 "nvme_adminq_poll_period_us": 10000, 00:15:08.811 "nvme_error_stat": false, 00:15:08.811 "nvme_ioq_poll_period_us": 0, 00:15:08.811 "rdma_cm_event_timeout_ms": 0, 00:15:08.811 "rdma_max_cq_size": 0, 00:15:08.811 "rdma_srq_size": 0, 00:15:08.811 "reconnect_delay_sec": 0, 00:15:08.811 "timeout_admin_us": 0, 00:15:08.811 "timeout_us": 0, 00:15:08.811 "transport_ack_timeout": 0, 00:15:08.811 "transport_retry_count": 4, 00:15:08.811 "transport_tos": 0 00:15:08.811 } 00:15:08.811 }, 00:15:08.811 { 00:15:08.811 "method": "bdev_nvme_set_hotplug", 00:15:08.811 "params": { 00:15:08.811 "enable": false, 00:15:08.811 "period_us": 100000 00:15:08.811 } 00:15:08.811 }, 00:15:08.811 { 00:15:08.811 "method": "bdev_malloc_create", 00:15:08.811 "params": { 00:15:08.811 "block_size": 4096, 00:15:08.811 "name": "malloc0", 00:15:08.811 "num_blocks": 8192, 00:15:08.811 "optimal_io_boundary": 0, 00:15:08.811 "physical_block_size": 4096, 00:15:08.811 "uuid": "1abda181-9f33-4470-b8a5-f9f58aaceaa9" 00:15:08.811 } 00:15:08.811 }, 00:15:08.811 { 00:15:08.811 "method": "bdev_wait_for_examine" 00:15:08.811 } 00:15:08.811 ] 00:15:08.811 }, 00:15:08.811 { 00:15:08.811 "subsystem": "nbd", 00:15:08.811 "config": [] 00:15:08.811 }, 00:15:08.811 { 00:15:08.811 "subsystem": "scheduler", 00:15:08.811 "config": [ 00:15:08.811 { 00:15:08.811 "method": "framework_set_scheduler", 00:15:08.811 "params": { 00:15:08.811 "name": "static" 00:15:08.811 } 00:15:08.811 } 00:15:08.811 ] 00:15:08.811 }, 00:15:08.811 { 00:15:08.811 "subsystem": "nvmf", 00:15:08.811 "config": [ 00:15:08.811 { 00:15:08.811 "method": "nvmf_set_config", 00:15:08.811 "params": { 00:15:08.811 "admin_cmd_passthru": { 00:15:08.811 "identify_ctrlr": false 00:15:08.811 }, 00:15:08.811 "discovery_filter": "match_any" 00:15:08.811 } 00:15:08.811 }, 00:15:08.811 { 00:15:08.812 "method": "nvmf_set_max_subsystems", 00:15:08.812 "params": { 00:15:08.812 "max_subsystems": 1024 00:15:08.812 } 00:15:08.812 }, 00:15:08.812 { 00:15:08.812 "method": "nvmf_set_crdt", 00:15:08.812 "params": { 00:15:08.812 "crdt1": 0, 00:15:08.812 "crdt2": 0, 00:15:08.812 "crdt3": 0 00:15:08.812 } 00:15:08.812 }, 00:15:08.812 { 00:15:08.812 "method": "nvmf_create_transport", 00:15:08.812 "params": { 00:15:08.812 "abort_timeout_sec": 1, 00:15:08.812 "ack_timeout": 0, 00:15:08.812 "buf_cache_size": 4294967295, 00:15:08.812 "c2h_success": false, 00:15:08.812 "data_wr_pool_size": 0, 00:15:08.812 "dif_insert_or_strip": false, 00:15:08.812 "in_capsule_data_size": 4096, 00:15:08.812 "io_unit_size": 131072, 00:15:08.812 "max_aq_depth": 128, 00:15:08.812 "max_io_qpairs_per_ctrlr": 127, 00:15:08.812 "max_io_size": 131072, 00:15:08.812 "max_queue_depth": 128, 00:15:08.812 "num_shared_buffers": 511, 00:15:08.812 "sock_priority": 0, 00:15:08.812 "trtype": "TCP", 00:15:08.812 "zcopy": false 00:15:08.812 } 00:15:08.812 }, 00:15:08.812 { 00:15:08.812 "method": "nvmf_create_subsystem", 00:15:08.812 "params": { 00:15:08.812 "allow_any_host": false, 00:15:08.812 "ana_reporting": false, 00:15:08.812 "max_cntlid": 65519, 00:15:08.812 "max_namespaces": 10, 00:15:08.812 "min_cntlid": 1, 00:15:08.812 "model_number": "SPDK bdev Controller", 00:15:08.812 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:08.812 "serial_number": "SPDK00000000000001" 00:15:08.812 
} 00:15:08.812 }, 00:15:08.812 { 00:15:08.812 "method": "nvmf_subsystem_add_host", 00:15:08.812 "params": { 00:15:08.812 "host": "nqn.2016-06.io.spdk:host1", 00:15:08.812 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:08.812 "psk": "/tmp/tmp.Ukncj92f3M" 00:15:08.812 } 00:15:08.812 }, 00:15:08.812 { 00:15:08.812 "method": "nvmf_subsystem_add_ns", 00:15:08.812 "params": { 00:15:08.812 "namespace": { 00:15:08.812 "bdev_name": "malloc0", 00:15:08.812 "nguid": "1ABDA1819F334470B8A5F9F58AACEAA9", 00:15:08.812 "no_auto_visible": false, 00:15:08.812 "nsid": 1, 00:15:08.812 "uuid": "1abda181-9f33-4470-b8a5-f9f58aaceaa9" 00:15:08.812 }, 00:15:08.812 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:15:08.812 } 00:15:08.812 }, 00:15:08.812 { 00:15:08.812 "method": "nvmf_subsystem_add_listener", 00:15:08.812 "params": { 00:15:08.812 "listen_address": { 00:15:08.812 "adrfam": "IPv4", 00:15:08.812 "traddr": "10.0.0.2", 00:15:08.812 "trsvcid": "4420", 00:15:08.812 "trtype": "TCP" 00:15:08.812 }, 00:15:08.812 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:08.812 "secure_channel": true 00:15:08.812 } 00:15:08.812 } 00:15:08.812 ] 00:15:08.812 } 00:15:08.812 ] 00:15:08.812 }' 00:15:08.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:08.812 18:34:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84256 00:15:08.812 18:34:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84256 00:15:08.812 18:34:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84256 ']' 00:15:08.812 18:34:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:08.812 18:34:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:08.812 18:34:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:08.812 18:34:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:08.812 18:34:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:08.812 18:34:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:15:08.812 [2024-07-15 18:34:31.312155] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:15:08.812 [2024-07-15 18:34:31.312225] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:09.071 [2024-07-15 18:34:31.453889] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:09.071 [2024-07-15 18:34:31.545398] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:09.071 [2024-07-15 18:34:31.545445] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:09.071 [2024-07-15 18:34:31.545454] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:09.071 [2024-07-15 18:34:31.545462] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:09.071 [2024-07-15 18:34:31.545469] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
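The nvmf_tgt above (pid 84256) is started inside the nvmf_tgt_ns_spdk namespace with '-m 0x2 -c /dev/fd/62', so the whole TLS target state, the TCP transport, the secure_channel listener on 10.0.0.2:4420 and the host entry carrying the PSK path, comes from the JSON echoed at target/tls.sh@203 rather than from individual RPC calls. A minimal sketch of reproducing that launch outside the harness follows; the /tmp/nvmf_tls_tgt.json file name is illustrative only, since the script itself feeds the JSON through process substitution (/dev/fd/62).

  # Sketch only: assumes the JSON dumped at target/tls.sh@203 has been saved to a file.
  # All flags are taken verbatim from the nvmf_tgt command above; the file name is made up.
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF -m 0x2 -c /tmp/nvmf_tls_tgt.json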
00:15:09.071 [2024-07-15 18:34:31.545546] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:09.330 [2024-07-15 18:34:31.751607] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:09.330 [2024-07-15 18:34:31.767510] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:09.330 [2024-07-15 18:34:31.783491] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:09.330 [2024-07-15 18:34:31.783666] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:09.589 18:34:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:09.589 18:34:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:09.589 18:34:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:09.589 18:34:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:09.589 18:34:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:09.589 18:34:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:09.589 18:34:32 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=84300 00:15:09.589 18:34:32 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:15:09.589 18:34:32 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 84300 /var/tmp/bdevperf.sock 00:15:09.848 18:34:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84300 ']' 00:15:09.848 18:34:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:09.848 18:34:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:09.848 18:34:32 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:15:09.848 "subsystems": [ 00:15:09.848 { 00:15:09.848 "subsystem": "keyring", 00:15:09.848 "config": [] 00:15:09.848 }, 00:15:09.848 { 00:15:09.848 "subsystem": "iobuf", 00:15:09.848 "config": [ 00:15:09.848 { 00:15:09.848 "method": "iobuf_set_options", 00:15:09.848 "params": { 00:15:09.848 "large_bufsize": 135168, 00:15:09.848 "large_pool_count": 1024, 00:15:09.848 "small_bufsize": 8192, 00:15:09.848 "small_pool_count": 8192 00:15:09.848 } 00:15:09.848 } 00:15:09.848 ] 00:15:09.848 }, 00:15:09.848 { 00:15:09.848 "subsystem": "sock", 00:15:09.848 "config": [ 00:15:09.848 { 00:15:09.848 "method": "sock_set_default_impl", 00:15:09.848 "params": { 00:15:09.848 "impl_name": "posix" 00:15:09.848 } 00:15:09.848 }, 00:15:09.848 { 00:15:09.848 "method": "sock_impl_set_options", 00:15:09.848 "params": { 00:15:09.848 "enable_ktls": false, 00:15:09.848 "enable_placement_id": 0, 00:15:09.848 "enable_quickack": false, 00:15:09.848 "enable_recv_pipe": true, 00:15:09.848 "enable_zerocopy_send_client": false, 00:15:09.848 "enable_zerocopy_send_server": true, 00:15:09.848 "impl_name": "ssl", 00:15:09.848 "recv_buf_size": 4096, 00:15:09.848 "send_buf_size": 4096, 00:15:09.848 "tls_version": 0, 00:15:09.848 "zerocopy_threshold": 0 00:15:09.848 } 00:15:09.848 }, 00:15:09.848 { 00:15:09.848 "method": "sock_impl_set_options", 00:15:09.849 "params": { 00:15:09.849 "enable_ktls": false, 00:15:09.849 "enable_placement_id": 0, 00:15:09.849 "enable_quickack": false, 00:15:09.849 "enable_recv_pipe": true, 00:15:09.849 
"enable_zerocopy_send_client": false, 00:15:09.849 "enable_zerocopy_send_server": true, 00:15:09.849 "impl_name": "posix", 00:15:09.849 "recv_buf_size": 2097152, 00:15:09.849 "send_buf_size": 2097152, 00:15:09.849 "tls_version": 0, 00:15:09.849 "zerocopy_threshold": 0 00:15:09.849 } 00:15:09.849 } 00:15:09.849 ] 00:15:09.849 }, 00:15:09.849 { 00:15:09.849 "subsystem": "vmd", 00:15:09.849 "config": [] 00:15:09.849 }, 00:15:09.849 { 00:15:09.849 "subsystem": "accel", 00:15:09.849 "config": [ 00:15:09.849 { 00:15:09.849 "method": "accel_set_options", 00:15:09.849 "params": { 00:15:09.849 "buf_count": 2048, 00:15:09.849 "large_cache_size": 16, 00:15:09.849 "sequence_count": 2048, 00:15:09.849 "small_cache_size": 128, 00:15:09.849 "task_count": 2048 00:15:09.849 } 00:15:09.849 } 00:15:09.849 ] 00:15:09.849 }, 00:15:09.849 { 00:15:09.849 "subsystem": "bdev", 00:15:09.849 "config": [ 00:15:09.849 { 00:15:09.849 "method": "bdev_set_options", 00:15:09.849 "params": { 00:15:09.849 "bdev_auto_examine": true, 00:15:09.849 "bdev_io_cache_size": 256, 00:15:09.849 "bdev_io_pool_size": 65535, 00:15:09.849 "iobuf_large_cache_size": 16, 00:15:09.849 "iobuf_small_cache_size": 128 00:15:09.849 } 00:15:09.849 }, 00:15:09.849 { 00:15:09.849 "method": "bdev_raid_set_options", 00:15:09.849 "params": { 00:15:09.849 "process_window_size_kb": 1024 00:15:09.849 } 00:15:09.849 }, 00:15:09.849 { 00:15:09.849 "method": "bdev_iscsi_set_options", 00:15:09.849 "params": { 00:15:09.849 "timeout_sec": 30 00:15:09.849 } 00:15:09.849 }, 00:15:09.849 { 00:15:09.849 "method": "bdev_nvme_set_options", 00:15:09.849 "params": { 00:15:09.849 "action_on_timeout": "none", 00:15:09.849 "allow_accel_sequence": false, 00:15:09.849 "arbitration_burst": 0, 00:15:09.849 "bdev_retry_count": 3, 00:15:09.849 "ctrlr_loss_timeout_sec": 0, 00:15:09.849 "delay_cmd_submit": true, 00:15:09.849 "dhchap_dhgroups": [ 00:15:09.849 "null", 00:15:09.849 "ffdhe2048", 00:15:09.849 "ffdhe3072", 00:15:09.849 "ffdhe4096", 00:15:09.849 "ffdhe6144", 00:15:09.849 "ffdhe8192" 00:15:09.849 ], 00:15:09.849 "dhchap_digests": [ 00:15:09.849 "sha256", 00:15:09.849 "sha384", 00:15:09.849 "sha512" 00:15:09.849 ], 00:15:09.849 "disable_auto_failback": false, 00:15:09.849 "fast_io_fail_timeout_sec": 0, 00:15:09.849 "generate_uuids": false, 00:15:09.849 "high_priority_weight": 0, 00:15:09.849 "io_path_stat": false, 00:15:09.849 "io_queue_requests": 512, 00:15:09.849 "keep_alive_timeout_ms": 10000, 00:15:09.849 "low_priority_weight": 0, 00:15:09.849 "medium_priority_weight": 0, 00:15:09.849 "nvme_adminq_poll_period_us": 10000, 00:15:09.849 "nvme_error_stat": false, 00:15:09.849 "nvme_ioq_poll_period_us": 0, 00:15:09.849 "rdma_cm_event_timeout_ms": 0, 00:15:09.849 "rdma_max_cq_size": 0, 00:15:09.849 "rdma_srq_size": 0, 00:15:09.849 "reconnect_delay_sec": 0, 00:15:09.849 "timeout_admin_us": 0, 00:15:09.849 "timeout_us": 0, 00:15:09.849 "transport_ack_timeout": 0, 00:15:09.849 "transport_retry_count": 4, 00:15:09.849 "transport_tos": 0 00:15:09.849 } 00:15:09.849 }, 00:15:09.849 { 00:15:09.849 "method": "bdev_nvme_attach_controller", 00:15:09.849 "params": { 00:15:09.849 "adrfam": "IPv4", 00:15:09.849 "ctrlr_loss_timeout_sec": 0, 00:15:09.849 "ddgst": false, 00:15:09.849 "fast_io_fail_timeout_sec": 0, 00:15:09.849 "hdgst": false, 00:15:09.849 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:09.849 "name": "TLSTEST", 00:15:09.849 "prchk_guard": false, 00:15:09.849 "prchk_reftag": false, 00:15:09.849 "psk": "/tmp/tmp.Ukncj92f3M", 00:15:09.849 "reconnect_delay_sec": 0, 00:15:09.849 
"subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:09.849 "traddr": "10.0.0.2", 00:15:09.849 "trsvcid": "4420", 00:15:09.849 "trtype": "TCP" 00:15:09.849 } 00:15:09.849 }, 00:15:09.849 { 00:15:09.849 "method": "bdev_nvme_set_hotplug", 00:15:09.849 "params": { 00:15:09.849 "enable": false, 00:15:09.849 "period_us": 100000 00:15:09.849 } 00:15:09.849 }, 00:15:09.849 { 00:15:09.849 "method": "bdev_wait_for_examine" 00:15:09.849 } 00:15:09.849 ] 00:15:09.849 }, 00:15:09.849 { 00:15:09.849 "subsystem": "nbd", 00:15:09.849 "config": [] 00:15:09.849 } 00:15:09.849 ] 00:15:09.849 }' 00:15:09.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:09.849 18:34:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:09.849 18:34:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:09.849 18:34:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:09.849 [2024-07-15 18:34:32.249492] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:15:09.849 [2024-07-15 18:34:32.249579] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84300 ] 00:15:09.849 [2024-07-15 18:34:32.392536] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:10.108 [2024-07-15 18:34:32.483707] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:10.108 [2024-07-15 18:34:32.627919] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:10.108 [2024-07-15 18:34:32.628022] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:10.675 18:34:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:10.675 18:34:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:10.675 18:34:33 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:15:10.675 Running I/O for 10 seconds... 
00:15:20.647 00:15:20.647 Latency(us) 00:15:20.647 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:20.647 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:20.647 Verification LBA range: start 0x0 length 0x2000 00:15:20.647 TLSTESTn1 : 10.01 5704.94 22.28 0.00 0.00 22401.62 4684.90 16634.04 00:15:20.647 =================================================================================================================== 00:15:20.647 Total : 5704.94 22.28 0.00 0.00 22401.62 4684.90 16634.04 00:15:20.647 0 00:15:20.647 18:34:43 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:20.647 18:34:43 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 84300 00:15:20.647 18:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84300 ']' 00:15:20.647 18:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84300 00:15:20.647 18:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:20.647 18:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:20.647 18:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84300 00:15:20.647 18:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:15:20.647 18:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:15:20.647 18:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84300' 00:15:20.647 killing process with pid 84300 00:15:20.647 Received shutdown signal, test time was about 10.000000 seconds 00:15:20.647 00:15:20.647 Latency(us) 00:15:20.647 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:20.647 =================================================================================================================== 00:15:20.647 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:20.647 18:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84300 00:15:20.647 [2024-07-15 18:34:43.227321] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:20.647 18:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84300 00:15:20.904 18:34:43 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 84256 00:15:20.904 18:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84256 ']' 00:15:20.904 18:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84256 00:15:20.904 18:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:20.904 18:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:20.904 18:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84256 00:15:20.904 18:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:20.904 18:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:20.904 18:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84256' 00:15:20.904 killing process with pid 84256 00:15:20.904 18:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84256 00:15:20.904 [2024-07-15 18:34:43.443517] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 
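The TLSTESTn1 row in the table above is internally consistent: at the 4096-byte IO size used by this job, 5704.94 IOPS works out to roughly 22.28 MiB/s, which is exactly the MiB/s column, and with the queue depth of 128 the reported average latency of about 22.4 ms is what Little's law predicts. A quick check:

  # IOPS to MiB/s at 4 KiB per IO, and queue depth / IOPS to expected average latency (us).
  echo "5704.94 * 4096 / 1048576" | bc -l    # ~22.28 MiB/s, matches the table
  echo "128 / 5704.94 * 1000000" | bc -l     # ~22437 us, close to the reported 22401.62 us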
00:15:20.904 18:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84256 00:15:21.162 18:34:43 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:15:21.162 18:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:21.162 18:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:21.162 18:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:21.162 18:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84445 00:15:21.162 18:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:21.162 18:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84445 00:15:21.162 18:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84445 ']' 00:15:21.162 18:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:21.162 18:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:21.163 18:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:21.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:21.163 18:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:21.163 18:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:21.163 [2024-07-15 18:34:43.699005] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:15:21.163 [2024-07-15 18:34:43.699077] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:21.421 [2024-07-15 18:34:43.827029] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:21.421 [2024-07-15 18:34:43.914318] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:21.421 [2024-07-15 18:34:43.914389] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:21.421 [2024-07-15 18:34:43.914400] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:21.421 [2024-07-15 18:34:43.914409] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:21.421 [2024-07-15 18:34:43.914415] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
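This target (pid 84445) is started bare, with no '-c' config, so everything TLS-related is added afterwards over the default /var/tmp/spdk.sock by setup_nvmf_tgt in the block that follows. The two queries below are not part of the test script; they are standard SPDK RPCs that can be used to confirm the freshly started target is answering before configuration begins.

  # Optional sanity checks (not in the original log) against the default RPC socket.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_transports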
00:15:21.421 [2024-07-15 18:34:43.914446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:21.988 18:34:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:21.988 18:34:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:21.988 18:34:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:21.988 18:34:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:21.988 18:34:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:22.246 18:34:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:22.246 18:34:44 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.Ukncj92f3M 00:15:22.246 18:34:44 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.Ukncj92f3M 00:15:22.246 18:34:44 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:22.246 [2024-07-15 18:34:44.803069] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:22.246 18:34:44 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:22.504 18:34:45 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:15:22.763 [2024-07-15 18:34:45.210444] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:22.763 [2024-07-15 18:34:45.210628] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:22.763 18:34:45 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:23.021 malloc0 00:15:23.021 18:34:45 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:23.021 18:34:45 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Ukncj92f3M 00:15:23.280 [2024-07-15 18:34:45.802462] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:23.280 18:34:45 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=84542 00:15:23.280 18:34:45 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:15:23.280 18:34:45 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:23.280 18:34:45 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 84542 /var/tmp/bdevperf.sock 00:15:23.280 18:34:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84542 ']' 00:15:23.280 18:34:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:23.280 18:34:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:23.280 18:34:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
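setup_nvmf_tgt (target/tls.sh@49 through @58) is the entire server-side TLS recipe in this test: create the TCP transport, create the subsystem, add a TLS-enabled listener (-k), back it with a malloc namespace, and finally allow host1 with the PSK file. Collected from the xtrace lines above into one place (the $rpc shorthand is added here for readability; rpc.py talks to the default /var/tmp/spdk.sock):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  $rpc bdev_malloc_create 32 4096 -b malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Ukncj92f3M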
00:15:23.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:23.280 18:34:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:23.280 18:34:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:23.280 [2024-07-15 18:34:45.877772] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:15:23.280 [2024-07-15 18:34:45.877843] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84542 ] 00:15:23.539 [2024-07-15 18:34:46.017743] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:23.539 [2024-07-15 18:34:46.102291] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:24.474 18:34:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:24.475 18:34:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:24.475 18:34:46 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Ukncj92f3M 00:15:24.475 18:34:46 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:24.733 [2024-07-15 18:34:47.092970] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:24.733 nvme0n1 00:15:24.733 18:34:47 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:24.733 Running I/O for 1 seconds... 
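Unlike the run at target/tls.sh@204, where the PSK reached bdev_nvme_attach_controller as a raw file path inside the bdevperf config (the route whose 'spdk_nvme_ctrlr_opts.psk' deprecation warning shows up when pid 84300 is killed), this bdevperf instance (pid 84542) takes the key through the keyring: the file is registered as key0 first and then referenced by name when attaching. The two calls, exactly as issued above against the bdevperf RPC socket ($rpc is shorthand added here):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Ukncj92f3M
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1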
00:15:26.109 00:15:26.109 Latency(us) 00:15:26.109 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:26.109 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:26.109 Verification LBA range: start 0x0 length 0x2000 00:15:26.109 nvme0n1 : 1.01 5688.03 22.22 0.00 0.00 22329.18 4684.90 18739.61 00:15:26.109 =================================================================================================================== 00:15:26.109 Total : 5688.03 22.22 0.00 0.00 22329.18 4684.90 18739.61 00:15:26.109 0 00:15:26.109 18:34:48 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 84542 00:15:26.109 18:34:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84542 ']' 00:15:26.109 18:34:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84542 00:15:26.109 18:34:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:26.109 18:34:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:26.109 18:34:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84542 00:15:26.109 18:34:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:26.109 18:34:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:26.109 18:34:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84542' 00:15:26.109 killing process with pid 84542 00:15:26.109 18:34:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84542 00:15:26.109 Received shutdown signal, test time was about 1.000000 seconds 00:15:26.109 00:15:26.109 Latency(us) 00:15:26.109 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:26.109 =================================================================================================================== 00:15:26.109 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:26.109 18:34:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84542 00:15:26.109 18:34:48 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 84445 00:15:26.109 18:34:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84445 ']' 00:15:26.109 18:34:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84445 00:15:26.109 18:34:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:26.109 18:34:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:26.109 18:34:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84445 00:15:26.109 18:34:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:26.109 18:34:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:26.109 killing process with pid 84445 00:15:26.109 18:34:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84445' 00:15:26.109 18:34:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84445 00:15:26.109 [2024-07-15 18:34:48.569694] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:26.109 18:34:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84445 00:15:26.368 18:34:48 nvmf_tcp.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:15:26.368 18:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:26.368 18:34:48 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:15:26.368 18:34:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:26.368 18:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84612 00:15:26.368 18:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:26.368 18:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84612 00:15:26.368 18:34:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84612 ']' 00:15:26.368 18:34:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:26.368 18:34:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:26.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:26.368 18:34:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:26.368 18:34:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:26.368 18:34:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:26.368 [2024-07-15 18:34:48.827025] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:15:26.368 [2024-07-15 18:34:48.827098] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:26.368 [2024-07-15 18:34:48.969783] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:26.627 [2024-07-15 18:34:49.059502] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:26.627 [2024-07-15 18:34:49.059555] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:26.627 [2024-07-15 18:34:49.059573] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:26.627 [2024-07-15 18:34:49.059582] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:26.627 [2024-07-15 18:34:49.059589] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:26.627 [2024-07-15 18:34:49.059618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:27.194 18:34:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:27.194 18:34:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:27.194 18:34:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:27.194 18:34:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:27.194 18:34:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:27.194 18:34:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:27.194 18:34:49 nvmf_tcp.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:15:27.194 18:34:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.194 18:34:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:27.194 [2024-07-15 18:34:49.750044] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:27.194 malloc0 00:15:27.194 [2024-07-15 18:34:49.778742] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:27.195 [2024-07-15 18:34:49.778922] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:27.195 18:34:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.453 18:34:49 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=84662 00:15:27.453 18:34:49 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:15:27.453 18:34:49 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 84662 /var/tmp/bdevperf.sock 00:15:27.453 18:34:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84662 ']' 00:15:27.453 18:34:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:27.454 18:34:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:27.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:27.454 18:34:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:27.454 18:34:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:27.454 18:34:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:27.454 [2024-07-15 18:34:49.857934] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 
00:15:27.454 [2024-07-15 18:34:49.858010] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84662 ] 00:15:27.454 [2024-07-15 18:34:49.997510] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:27.727 [2024-07-15 18:34:50.089148] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:28.296 18:34:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:28.296 18:34:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:28.296 18:34:50 nvmf_tcp.nvmf_tls -- target/tls.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Ukncj92f3M 00:15:28.556 18:34:50 nvmf_tcp.nvmf_tls -- target/tls.sh@258 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:28.556 [2024-07-15 18:34:51.080290] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:28.556 nvme0n1 00:15:28.816 18:34:51 nvmf_tcp.nvmf_tls -- target/tls.sh@262 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:28.816 Running I/O for 1 seconds... 00:15:29.753 00:15:29.753 Latency(us) 00:15:29.753 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:29.753 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:29.753 Verification LBA range: start 0x0 length 0x2000 00:15:29.753 nvme0n1 : 1.01 5762.54 22.51 0.00 0.00 22045.98 4974.42 16002.36 00:15:29.753 =================================================================================================================== 00:15:29.753 Total : 5762.54 22.51 0.00 0.00 22045.98 4974.42 16002.36 00:15:29.753 0 00:15:29.754 18:34:52 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:15:29.754 18:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.754 18:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:30.017 18:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.017 18:34:52 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:15:30.017 "subsystems": [ 00:15:30.017 { 00:15:30.017 "subsystem": "keyring", 00:15:30.017 "config": [ 00:15:30.017 { 00:15:30.017 "method": "keyring_file_add_key", 00:15:30.017 "params": { 00:15:30.017 "name": "key0", 00:15:30.017 "path": "/tmp/tmp.Ukncj92f3M" 00:15:30.017 } 00:15:30.017 } 00:15:30.017 ] 00:15:30.017 }, 00:15:30.017 { 00:15:30.017 "subsystem": "iobuf", 00:15:30.017 "config": [ 00:15:30.017 { 00:15:30.017 "method": "iobuf_set_options", 00:15:30.017 "params": { 00:15:30.017 "large_bufsize": 135168, 00:15:30.017 "large_pool_count": 1024, 00:15:30.017 "small_bufsize": 8192, 00:15:30.017 "small_pool_count": 8192 00:15:30.017 } 00:15:30.017 } 00:15:30.017 ] 00:15:30.017 }, 00:15:30.017 { 00:15:30.017 "subsystem": "sock", 00:15:30.017 "config": [ 00:15:30.017 { 00:15:30.017 "method": "sock_set_default_impl", 00:15:30.017 "params": { 00:15:30.017 "impl_name": "posix" 00:15:30.017 } 00:15:30.017 }, 00:15:30.017 { 00:15:30.017 "method": "sock_impl_set_options", 00:15:30.017 "params": { 00:15:30.017 
"enable_ktls": false, 00:15:30.017 "enable_placement_id": 0, 00:15:30.017 "enable_quickack": false, 00:15:30.017 "enable_recv_pipe": true, 00:15:30.017 "enable_zerocopy_send_client": false, 00:15:30.017 "enable_zerocopy_send_server": true, 00:15:30.017 "impl_name": "ssl", 00:15:30.017 "recv_buf_size": 4096, 00:15:30.017 "send_buf_size": 4096, 00:15:30.017 "tls_version": 0, 00:15:30.017 "zerocopy_threshold": 0 00:15:30.017 } 00:15:30.017 }, 00:15:30.017 { 00:15:30.017 "method": "sock_impl_set_options", 00:15:30.017 "params": { 00:15:30.017 "enable_ktls": false, 00:15:30.017 "enable_placement_id": 0, 00:15:30.017 "enable_quickack": false, 00:15:30.017 "enable_recv_pipe": true, 00:15:30.017 "enable_zerocopy_send_client": false, 00:15:30.017 "enable_zerocopy_send_server": true, 00:15:30.017 "impl_name": "posix", 00:15:30.017 "recv_buf_size": 2097152, 00:15:30.017 "send_buf_size": 2097152, 00:15:30.017 "tls_version": 0, 00:15:30.017 "zerocopy_threshold": 0 00:15:30.017 } 00:15:30.017 } 00:15:30.017 ] 00:15:30.017 }, 00:15:30.017 { 00:15:30.017 "subsystem": "vmd", 00:15:30.017 "config": [] 00:15:30.017 }, 00:15:30.017 { 00:15:30.017 "subsystem": "accel", 00:15:30.017 "config": [ 00:15:30.017 { 00:15:30.017 "method": "accel_set_options", 00:15:30.017 "params": { 00:15:30.017 "buf_count": 2048, 00:15:30.017 "large_cache_size": 16, 00:15:30.017 "sequence_count": 2048, 00:15:30.017 "small_cache_size": 128, 00:15:30.017 "task_count": 2048 00:15:30.017 } 00:15:30.017 } 00:15:30.017 ] 00:15:30.017 }, 00:15:30.017 { 00:15:30.017 "subsystem": "bdev", 00:15:30.017 "config": [ 00:15:30.017 { 00:15:30.017 "method": "bdev_set_options", 00:15:30.017 "params": { 00:15:30.017 "bdev_auto_examine": true, 00:15:30.017 "bdev_io_cache_size": 256, 00:15:30.017 "bdev_io_pool_size": 65535, 00:15:30.017 "iobuf_large_cache_size": 16, 00:15:30.017 "iobuf_small_cache_size": 128 00:15:30.017 } 00:15:30.017 }, 00:15:30.017 { 00:15:30.017 "method": "bdev_raid_set_options", 00:15:30.017 "params": { 00:15:30.017 "process_window_size_kb": 1024 00:15:30.017 } 00:15:30.017 }, 00:15:30.017 { 00:15:30.017 "method": "bdev_iscsi_set_options", 00:15:30.017 "params": { 00:15:30.017 "timeout_sec": 30 00:15:30.017 } 00:15:30.017 }, 00:15:30.017 { 00:15:30.017 "method": "bdev_nvme_set_options", 00:15:30.017 "params": { 00:15:30.017 "action_on_timeout": "none", 00:15:30.017 "allow_accel_sequence": false, 00:15:30.017 "arbitration_burst": 0, 00:15:30.017 "bdev_retry_count": 3, 00:15:30.017 "ctrlr_loss_timeout_sec": 0, 00:15:30.017 "delay_cmd_submit": true, 00:15:30.017 "dhchap_dhgroups": [ 00:15:30.017 "null", 00:15:30.017 "ffdhe2048", 00:15:30.017 "ffdhe3072", 00:15:30.017 "ffdhe4096", 00:15:30.017 "ffdhe6144", 00:15:30.017 "ffdhe8192" 00:15:30.017 ], 00:15:30.017 "dhchap_digests": [ 00:15:30.017 "sha256", 00:15:30.017 "sha384", 00:15:30.018 "sha512" 00:15:30.018 ], 00:15:30.018 "disable_auto_failback": false, 00:15:30.018 "fast_io_fail_timeout_sec": 0, 00:15:30.018 "generate_uuids": false, 00:15:30.018 "high_priority_weight": 0, 00:15:30.018 "io_path_stat": false, 00:15:30.018 "io_queue_requests": 0, 00:15:30.018 "keep_alive_timeout_ms": 10000, 00:15:30.018 "low_priority_weight": 0, 00:15:30.018 "medium_priority_weight": 0, 00:15:30.018 "nvme_adminq_poll_period_us": 10000, 00:15:30.018 "nvme_error_stat": false, 00:15:30.018 "nvme_ioq_poll_period_us": 0, 00:15:30.018 "rdma_cm_event_timeout_ms": 0, 00:15:30.018 "rdma_max_cq_size": 0, 00:15:30.018 "rdma_srq_size": 0, 00:15:30.018 "reconnect_delay_sec": 0, 00:15:30.018 "timeout_admin_us": 0, 
00:15:30.018 "timeout_us": 0, 00:15:30.018 "transport_ack_timeout": 0, 00:15:30.018 "transport_retry_count": 4, 00:15:30.018 "transport_tos": 0 00:15:30.018 } 00:15:30.018 }, 00:15:30.018 { 00:15:30.018 "method": "bdev_nvme_set_hotplug", 00:15:30.018 "params": { 00:15:30.018 "enable": false, 00:15:30.018 "period_us": 100000 00:15:30.018 } 00:15:30.018 }, 00:15:30.018 { 00:15:30.018 "method": "bdev_malloc_create", 00:15:30.018 "params": { 00:15:30.018 "block_size": 4096, 00:15:30.018 "name": "malloc0", 00:15:30.018 "num_blocks": 8192, 00:15:30.018 "optimal_io_boundary": 0, 00:15:30.018 "physical_block_size": 4096, 00:15:30.018 "uuid": "2465f632-cfab-4f0c-b7d6-e91f17990a69" 00:15:30.018 } 00:15:30.018 }, 00:15:30.018 { 00:15:30.018 "method": "bdev_wait_for_examine" 00:15:30.018 } 00:15:30.018 ] 00:15:30.018 }, 00:15:30.018 { 00:15:30.018 "subsystem": "nbd", 00:15:30.018 "config": [] 00:15:30.018 }, 00:15:30.018 { 00:15:30.018 "subsystem": "scheduler", 00:15:30.018 "config": [ 00:15:30.018 { 00:15:30.018 "method": "framework_set_scheduler", 00:15:30.018 "params": { 00:15:30.018 "name": "static" 00:15:30.018 } 00:15:30.018 } 00:15:30.018 ] 00:15:30.018 }, 00:15:30.018 { 00:15:30.018 "subsystem": "nvmf", 00:15:30.018 "config": [ 00:15:30.018 { 00:15:30.018 "method": "nvmf_set_config", 00:15:30.018 "params": { 00:15:30.018 "admin_cmd_passthru": { 00:15:30.018 "identify_ctrlr": false 00:15:30.018 }, 00:15:30.018 "discovery_filter": "match_any" 00:15:30.018 } 00:15:30.018 }, 00:15:30.018 { 00:15:30.018 "method": "nvmf_set_max_subsystems", 00:15:30.018 "params": { 00:15:30.018 "max_subsystems": 1024 00:15:30.018 } 00:15:30.018 }, 00:15:30.018 { 00:15:30.018 "method": "nvmf_set_crdt", 00:15:30.018 "params": { 00:15:30.018 "crdt1": 0, 00:15:30.018 "crdt2": 0, 00:15:30.018 "crdt3": 0 00:15:30.018 } 00:15:30.018 }, 00:15:30.018 { 00:15:30.018 "method": "nvmf_create_transport", 00:15:30.018 "params": { 00:15:30.018 "abort_timeout_sec": 1, 00:15:30.018 "ack_timeout": 0, 00:15:30.018 "buf_cache_size": 4294967295, 00:15:30.018 "c2h_success": false, 00:15:30.018 "data_wr_pool_size": 0, 00:15:30.018 "dif_insert_or_strip": false, 00:15:30.018 "in_capsule_data_size": 4096, 00:15:30.018 "io_unit_size": 131072, 00:15:30.018 "max_aq_depth": 128, 00:15:30.018 "max_io_qpairs_per_ctrlr": 127, 00:15:30.018 "max_io_size": 131072, 00:15:30.018 "max_queue_depth": 128, 00:15:30.018 "num_shared_buffers": 511, 00:15:30.018 "sock_priority": 0, 00:15:30.018 "trtype": "TCP", 00:15:30.018 "zcopy": false 00:15:30.018 } 00:15:30.018 }, 00:15:30.018 { 00:15:30.018 "method": "nvmf_create_subsystem", 00:15:30.018 "params": { 00:15:30.018 "allow_any_host": false, 00:15:30.018 "ana_reporting": false, 00:15:30.018 "max_cntlid": 65519, 00:15:30.018 "max_namespaces": 32, 00:15:30.018 "min_cntlid": 1, 00:15:30.018 "model_number": "SPDK bdev Controller", 00:15:30.018 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:30.018 "serial_number": "00000000000000000000" 00:15:30.018 } 00:15:30.018 }, 00:15:30.018 { 00:15:30.018 "method": "nvmf_subsystem_add_host", 00:15:30.018 "params": { 00:15:30.018 "host": "nqn.2016-06.io.spdk:host1", 00:15:30.018 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:30.018 "psk": "key0" 00:15:30.018 } 00:15:30.018 }, 00:15:30.018 { 00:15:30.018 "method": "nvmf_subsystem_add_ns", 00:15:30.018 "params": { 00:15:30.018 "namespace": { 00:15:30.018 "bdev_name": "malloc0", 00:15:30.018 "nguid": "2465F632CFAB4F0CB7D6E91F17990A69", 00:15:30.018 "no_auto_visible": false, 00:15:30.018 "nsid": 1, 00:15:30.018 "uuid": 
"2465f632-cfab-4f0c-b7d6-e91f17990a69" 00:15:30.018 }, 00:15:30.018 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:15:30.018 } 00:15:30.018 }, 00:15:30.018 { 00:15:30.018 "method": "nvmf_subsystem_add_listener", 00:15:30.018 "params": { 00:15:30.018 "listen_address": { 00:15:30.018 "adrfam": "IPv4", 00:15:30.018 "traddr": "10.0.0.2", 00:15:30.018 "trsvcid": "4420", 00:15:30.018 "trtype": "TCP" 00:15:30.018 }, 00:15:30.018 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:30.018 "secure_channel": false, 00:15:30.018 "sock_impl": "ssl" 00:15:30.018 } 00:15:30.018 } 00:15:30.018 ] 00:15:30.018 } 00:15:30.018 ] 00:15:30.018 }' 00:15:30.018 18:34:52 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:15:30.278 18:34:52 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:15:30.278 "subsystems": [ 00:15:30.278 { 00:15:30.278 "subsystem": "keyring", 00:15:30.278 "config": [ 00:15:30.278 { 00:15:30.278 "method": "keyring_file_add_key", 00:15:30.278 "params": { 00:15:30.278 "name": "key0", 00:15:30.278 "path": "/tmp/tmp.Ukncj92f3M" 00:15:30.278 } 00:15:30.278 } 00:15:30.278 ] 00:15:30.278 }, 00:15:30.278 { 00:15:30.278 "subsystem": "iobuf", 00:15:30.278 "config": [ 00:15:30.278 { 00:15:30.278 "method": "iobuf_set_options", 00:15:30.278 "params": { 00:15:30.278 "large_bufsize": 135168, 00:15:30.278 "large_pool_count": 1024, 00:15:30.278 "small_bufsize": 8192, 00:15:30.278 "small_pool_count": 8192 00:15:30.278 } 00:15:30.278 } 00:15:30.278 ] 00:15:30.278 }, 00:15:30.278 { 00:15:30.278 "subsystem": "sock", 00:15:30.278 "config": [ 00:15:30.278 { 00:15:30.278 "method": "sock_set_default_impl", 00:15:30.278 "params": { 00:15:30.278 "impl_name": "posix" 00:15:30.278 } 00:15:30.278 }, 00:15:30.278 { 00:15:30.278 "method": "sock_impl_set_options", 00:15:30.278 "params": { 00:15:30.278 "enable_ktls": false, 00:15:30.278 "enable_placement_id": 0, 00:15:30.278 "enable_quickack": false, 00:15:30.278 "enable_recv_pipe": true, 00:15:30.278 "enable_zerocopy_send_client": false, 00:15:30.278 "enable_zerocopy_send_server": true, 00:15:30.278 "impl_name": "ssl", 00:15:30.278 "recv_buf_size": 4096, 00:15:30.278 "send_buf_size": 4096, 00:15:30.278 "tls_version": 0, 00:15:30.278 "zerocopy_threshold": 0 00:15:30.278 } 00:15:30.278 }, 00:15:30.278 { 00:15:30.278 "method": "sock_impl_set_options", 00:15:30.278 "params": { 00:15:30.278 "enable_ktls": false, 00:15:30.278 "enable_placement_id": 0, 00:15:30.278 "enable_quickack": false, 00:15:30.278 "enable_recv_pipe": true, 00:15:30.278 "enable_zerocopy_send_client": false, 00:15:30.278 "enable_zerocopy_send_server": true, 00:15:30.278 "impl_name": "posix", 00:15:30.278 "recv_buf_size": 2097152, 00:15:30.278 "send_buf_size": 2097152, 00:15:30.278 "tls_version": 0, 00:15:30.278 "zerocopy_threshold": 0 00:15:30.278 } 00:15:30.278 } 00:15:30.278 ] 00:15:30.278 }, 00:15:30.278 { 00:15:30.278 "subsystem": "vmd", 00:15:30.278 "config": [] 00:15:30.278 }, 00:15:30.278 { 00:15:30.278 "subsystem": "accel", 00:15:30.278 "config": [ 00:15:30.278 { 00:15:30.278 "method": "accel_set_options", 00:15:30.278 "params": { 00:15:30.278 "buf_count": 2048, 00:15:30.278 "large_cache_size": 16, 00:15:30.278 "sequence_count": 2048, 00:15:30.278 "small_cache_size": 128, 00:15:30.278 "task_count": 2048 00:15:30.278 } 00:15:30.278 } 00:15:30.278 ] 00:15:30.278 }, 00:15:30.278 { 00:15:30.278 "subsystem": "bdev", 00:15:30.278 "config": [ 00:15:30.278 { 00:15:30.278 "method": "bdev_set_options", 00:15:30.278 "params": { 00:15:30.278 
"bdev_auto_examine": true, 00:15:30.278 "bdev_io_cache_size": 256, 00:15:30.278 "bdev_io_pool_size": 65535, 00:15:30.278 "iobuf_large_cache_size": 16, 00:15:30.278 "iobuf_small_cache_size": 128 00:15:30.278 } 00:15:30.278 }, 00:15:30.278 { 00:15:30.278 "method": "bdev_raid_set_options", 00:15:30.278 "params": { 00:15:30.278 "process_window_size_kb": 1024 00:15:30.278 } 00:15:30.278 }, 00:15:30.278 { 00:15:30.278 "method": "bdev_iscsi_set_options", 00:15:30.278 "params": { 00:15:30.278 "timeout_sec": 30 00:15:30.278 } 00:15:30.278 }, 00:15:30.278 { 00:15:30.278 "method": "bdev_nvme_set_options", 00:15:30.278 "params": { 00:15:30.278 "action_on_timeout": "none", 00:15:30.278 "allow_accel_sequence": false, 00:15:30.278 "arbitration_burst": 0, 00:15:30.278 "bdev_retry_count": 3, 00:15:30.278 "ctrlr_loss_timeout_sec": 0, 00:15:30.278 "delay_cmd_submit": true, 00:15:30.278 "dhchap_dhgroups": [ 00:15:30.278 "null", 00:15:30.278 "ffdhe2048", 00:15:30.278 "ffdhe3072", 00:15:30.278 "ffdhe4096", 00:15:30.278 "ffdhe6144", 00:15:30.278 "ffdhe8192" 00:15:30.278 ], 00:15:30.278 "dhchap_digests": [ 00:15:30.278 "sha256", 00:15:30.278 "sha384", 00:15:30.278 "sha512" 00:15:30.278 ], 00:15:30.278 "disable_auto_failback": false, 00:15:30.278 "fast_io_fail_timeout_sec": 0, 00:15:30.278 "generate_uuids": false, 00:15:30.278 "high_priority_weight": 0, 00:15:30.278 "io_path_stat": false, 00:15:30.278 "io_queue_requests": 512, 00:15:30.278 "keep_alive_timeout_ms": 10000, 00:15:30.278 "low_priority_weight": 0, 00:15:30.278 "medium_priority_weight": 0, 00:15:30.278 "nvme_adminq_poll_period_us": 10000, 00:15:30.278 "nvme_error_stat": false, 00:15:30.278 "nvme_ioq_poll_period_us": 0, 00:15:30.278 "rdma_cm_event_timeout_ms": 0, 00:15:30.278 "rdma_max_cq_size": 0, 00:15:30.278 "rdma_srq_size": 0, 00:15:30.278 "reconnect_delay_sec": 0, 00:15:30.278 "timeout_admin_us": 0, 00:15:30.278 "timeout_us": 0, 00:15:30.278 "transport_ack_timeout": 0, 00:15:30.278 "transport_retry_count": 4, 00:15:30.278 "transport_tos": 0 00:15:30.278 } 00:15:30.278 }, 00:15:30.278 { 00:15:30.278 "method": "bdev_nvme_attach_controller", 00:15:30.278 "params": { 00:15:30.278 "adrfam": "IPv4", 00:15:30.278 "ctrlr_loss_timeout_sec": 0, 00:15:30.278 "ddgst": false, 00:15:30.278 "fast_io_fail_timeout_sec": 0, 00:15:30.278 "hdgst": false, 00:15:30.278 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:30.278 "name": "nvme0", 00:15:30.278 "prchk_guard": false, 00:15:30.278 "prchk_reftag": false, 00:15:30.278 "psk": "key0", 00:15:30.278 "reconnect_delay_sec": 0, 00:15:30.278 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:30.278 "traddr": "10.0.0.2", 00:15:30.278 "trsvcid": "4420", 00:15:30.278 "trtype": "TCP" 00:15:30.278 } 00:15:30.278 }, 00:15:30.278 { 00:15:30.278 "method": "bdev_nvme_set_hotplug", 00:15:30.278 "params": { 00:15:30.278 "enable": false, 00:15:30.278 "period_us": 100000 00:15:30.278 } 00:15:30.278 }, 00:15:30.278 { 00:15:30.278 "method": "bdev_enable_histogram", 00:15:30.278 "params": { 00:15:30.278 "enable": true, 00:15:30.278 "name": "nvme0n1" 00:15:30.278 } 00:15:30.278 }, 00:15:30.278 { 00:15:30.278 "method": "bdev_wait_for_examine" 00:15:30.279 } 00:15:30.279 ] 00:15:30.279 }, 00:15:30.279 { 00:15:30.279 "subsystem": "nbd", 00:15:30.279 "config": [] 00:15:30.279 } 00:15:30.279 ] 00:15:30.279 }' 00:15:30.279 18:34:52 nvmf_tcp.nvmf_tls -- target/tls.sh@268 -- # killprocess 84662 00:15:30.279 18:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84662 ']' 00:15:30.279 18:34:52 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@952 -- # kill -0 84662 00:15:30.279 18:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:30.279 18:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:30.279 18:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84662 00:15:30.279 18:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:30.279 killing process with pid 84662 00:15:30.279 Received shutdown signal, test time was about 1.000000 seconds 00:15:30.279 00:15:30.279 Latency(us) 00:15:30.279 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:30.279 =================================================================================================================== 00:15:30.279 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:30.279 18:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:30.279 18:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84662' 00:15:30.279 18:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84662 00:15:30.279 18:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84662 00:15:30.538 18:34:52 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # killprocess 84612 00:15:30.538 18:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84612 ']' 00:15:30.538 18:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84612 00:15:30.538 18:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:30.538 18:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:30.538 18:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84612 00:15:30.538 18:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:30.538 18:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:30.538 killing process with pid 84612 00:15:30.538 18:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84612' 00:15:30.538 18:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84612 00:15:30.538 18:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84612 00:15:30.806 18:34:53 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:15:30.806 18:34:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:30.806 18:34:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:30.806 18:34:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:30.806 18:34:53 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:15:30.806 "subsystems": [ 00:15:30.806 { 00:15:30.806 "subsystem": "keyring", 00:15:30.806 "config": [ 00:15:30.806 { 00:15:30.806 "method": "keyring_file_add_key", 00:15:30.806 "params": { 00:15:30.806 "name": "key0", 00:15:30.806 "path": "/tmp/tmp.Ukncj92f3M" 00:15:30.806 } 00:15:30.806 } 00:15:30.806 ] 00:15:30.806 }, 00:15:30.806 { 00:15:30.806 "subsystem": "iobuf", 00:15:30.806 "config": [ 00:15:30.806 { 00:15:30.806 "method": "iobuf_set_options", 00:15:30.806 "params": { 00:15:30.806 "large_bufsize": 135168, 00:15:30.806 "large_pool_count": 1024, 00:15:30.806 "small_bufsize": 8192, 00:15:30.806 "small_pool_count": 8192 00:15:30.806 } 00:15:30.806 } 00:15:30.806 ] 00:15:30.806 }, 00:15:30.806 { 00:15:30.806 "subsystem": 
"sock", 00:15:30.806 "config": [ 00:15:30.806 { 00:15:30.806 "method": "sock_set_default_impl", 00:15:30.806 "params": { 00:15:30.806 "impl_name": "posix" 00:15:30.806 } 00:15:30.806 }, 00:15:30.806 { 00:15:30.806 "method": "sock_impl_set_options", 00:15:30.806 "params": { 00:15:30.806 "enable_ktls": false, 00:15:30.806 "enable_placement_id": 0, 00:15:30.806 "enable_quickack": false, 00:15:30.806 "enable_recv_pipe": true, 00:15:30.806 "enable_zerocopy_send_client": false, 00:15:30.806 "enable_zerocopy_send_server": true, 00:15:30.806 "impl_name": "ssl", 00:15:30.806 "recv_buf_size": 4096, 00:15:30.806 "send_buf_size": 4096, 00:15:30.806 "tls_version": 0, 00:15:30.806 "zerocopy_threshold": 0 00:15:30.806 } 00:15:30.806 }, 00:15:30.806 { 00:15:30.806 "method": "sock_impl_set_options", 00:15:30.806 "params": { 00:15:30.806 "enable_ktls": false, 00:15:30.806 "enable_placement_id": 0, 00:15:30.806 "enable_quickack": false, 00:15:30.806 "enable_recv_pipe": true, 00:15:30.806 "enable_zerocopy_send_client": false, 00:15:30.806 "enable_zerocopy_send_server": true, 00:15:30.806 "impl_name": "posix", 00:15:30.806 "recv_buf_size": 2097152, 00:15:30.806 "send_buf_size": 2097152, 00:15:30.806 "tls_version": 0, 00:15:30.806 "zerocopy_threshold": 0 00:15:30.806 } 00:15:30.806 } 00:15:30.806 ] 00:15:30.806 }, 00:15:30.806 { 00:15:30.806 "subsystem": "vmd", 00:15:30.806 "config": [] 00:15:30.806 }, 00:15:30.806 { 00:15:30.806 "subsystem": "accel", 00:15:30.806 "config": [ 00:15:30.806 { 00:15:30.806 "method": "accel_set_options", 00:15:30.806 "params": { 00:15:30.806 "buf_count": 2048, 00:15:30.806 "large_cache_size": 16, 00:15:30.806 "sequence_count": 2048, 00:15:30.806 "small_cache_size": 128, 00:15:30.806 "task_count": 2048 00:15:30.806 } 00:15:30.806 } 00:15:30.806 ] 00:15:30.806 }, 00:15:30.806 { 00:15:30.806 "subsystem": "bdev", 00:15:30.806 "config": [ 00:15:30.806 { 00:15:30.806 "method": "bdev_set_options", 00:15:30.806 "params": { 00:15:30.806 "bdev_auto_examine": true, 00:15:30.806 "bdev_io_cache_size": 256, 00:15:30.806 "bdev_io_pool_size": 65535, 00:15:30.806 "iobuf_large_cache_size": 16, 00:15:30.806 "iobuf_small_cache_size": 128 00:15:30.806 } 00:15:30.806 }, 00:15:30.806 { 00:15:30.806 "method": "bdev_raid_set_options", 00:15:30.806 "params": { 00:15:30.806 "process_window_size_kb": 1024 00:15:30.806 } 00:15:30.806 }, 00:15:30.806 { 00:15:30.806 "method": "bdev_iscsi_set_options", 00:15:30.806 "params": { 00:15:30.806 "timeout_sec": 30 00:15:30.806 } 00:15:30.806 }, 00:15:30.806 { 00:15:30.806 "method": "bdev_nvme_set_options", 00:15:30.806 "params": { 00:15:30.806 "action_on_timeout": "none", 00:15:30.806 "allow_accel_sequence": false, 00:15:30.806 "arbitration_burst": 0, 00:15:30.806 "bdev_retry_count": 3, 00:15:30.806 "ctrlr_loss_timeout_sec": 0, 00:15:30.806 "delay_cmd_submit": true, 00:15:30.806 "dhchap_dhgroups": [ 00:15:30.806 "null", 00:15:30.806 "ffdhe2048", 00:15:30.806 "ffdhe3072", 00:15:30.806 "ffdhe4096", 00:15:30.806 "ffdhe6144", 00:15:30.806 "ffdhe8192" 00:15:30.806 ], 00:15:30.806 "dhchap_digests": [ 00:15:30.806 "sha256", 00:15:30.806 "sha384", 00:15:30.806 "sha512" 00:15:30.806 ], 00:15:30.806 "disable_auto_failback": false, 00:15:30.806 "fast_io_fail_timeout_sec": 0, 00:15:30.806 "generate_uuids": false, 00:15:30.806 "high_priority_weight": 0, 00:15:30.806 "io_path_stat": false, 00:15:30.806 "io_queue_requests": 0, 00:15:30.806 "keep_alive_timeout_ms": 10000, 00:15:30.806 "low_priority_weight": 0, 00:15:30.806 "medium_priority_weight": 0, 00:15:30.806 
"nvme_adminq_poll_period_us": 10000, 00:15:30.806 "nvme_error_stat": false, 00:15:30.806 "nvme_ioq_poll_period_us": 0, 00:15:30.806 "rdma_cm_event_timeout_ms": 0, 00:15:30.806 "rdma_max_cq_size": 0, 00:15:30.806 "rdma_srq_size": 0, 00:15:30.806 "reconnect_delay_sec": 0, 00:15:30.806 "timeout_admin_us": 0, 00:15:30.806 "timeout_us": 0, 00:15:30.806 "transport_ack_timeout": 0, 00:15:30.806 "transport_retry_count": 4, 00:15:30.806 "transport_tos": 0 00:15:30.806 } 00:15:30.806 }, 00:15:30.806 { 00:15:30.806 "method": "bdev_nvme_set_hotplug", 00:15:30.806 "params": { 00:15:30.806 "enable": false, 00:15:30.806 "period_us": 100000 00:15:30.806 } 00:15:30.806 }, 00:15:30.806 { 00:15:30.806 "method": "bdev_malloc_create", 00:15:30.806 "params": { 00:15:30.806 "block_size": 4096, 00:15:30.806 "name": "malloc0", 00:15:30.806 "num_blocks": 8192, 00:15:30.806 "optimal_io_boundary": 0, 00:15:30.806 "physical_block_size": 4096, 00:15:30.806 "uuid": "2465f632-cfab-4f0c-b7d6-e91f17990a69" 00:15:30.806 } 00:15:30.806 }, 00:15:30.807 { 00:15:30.807 "method": "bdev_wait_for_examine" 00:15:30.807 } 00:15:30.807 ] 00:15:30.807 }, 00:15:30.807 { 00:15:30.807 "subsystem": "nbd", 00:15:30.807 "config": [] 00:15:30.807 }, 00:15:30.807 { 00:15:30.807 "subsystem": "scheduler", 00:15:30.807 "config": [ 00:15:30.807 { 00:15:30.807 "method": "framework_set_scheduler", 00:15:30.807 "params": { 00:15:30.807 "name": "static" 00:15:30.807 } 00:15:30.807 } 00:15:30.807 ] 00:15:30.807 }, 00:15:30.807 { 00:15:30.807 "subsystem": "nvmf", 00:15:30.807 "config": [ 00:15:30.807 { 00:15:30.807 "method": "nvmf_set_config", 00:15:30.807 "params": { 00:15:30.807 "admin_cmd_passthru": { 00:15:30.807 "identify_ctrlr": false 00:15:30.807 }, 00:15:30.807 "discovery_filter": "match_any" 00:15:30.807 } 00:15:30.807 }, 00:15:30.807 { 00:15:30.807 "method": "nvmf_set_max_subsystems", 00:15:30.807 "params": { 00:15:30.807 "max_subsystems": 1024 00:15:30.807 } 00:15:30.807 }, 00:15:30.807 { 00:15:30.807 "method": "nvmf_set_crdt", 00:15:30.807 "params": { 00:15:30.807 "crdt1": 0, 00:15:30.807 "crdt2": 0, 00:15:30.807 "crdt3": 0 00:15:30.807 } 00:15:30.807 }, 00:15:30.807 { 00:15:30.807 "method": "nvmf_create_transport", 00:15:30.807 "params": { 00:15:30.807 "abort_timeout_sec": 1, 00:15:30.807 "ack_timeout": 0, 00:15:30.807 "buf_cache_size": 4294967295, 00:15:30.807 "c2h_success": false, 00:15:30.807 "data_wr_pool_size": 0, 00:15:30.807 "dif_insert_or_strip": false, 00:15:30.807 "in_capsule_data_size": 4096, 00:15:30.807 "io_unit_size": 131072, 00:15:30.807 "max_aq_depth": 128, 00:15:30.807 "max_io_qpairs_per_ctrlr": 127, 00:15:30.807 "max_io_size": 131072, 00:15:30.807 "max_queue_depth": 128, 00:15:30.807 "num_shared_buffers": 511, 00:15:30.807 "sock_priority": 0, 00:15:30.807 "trtype": "TCP", 00:15:30.807 "zcopy": false 00:15:30.807 } 00:15:30.807 }, 00:15:30.807 { 00:15:30.807 "method": "nvmf_create_subsystem", 00:15:30.807 "params": { 00:15:30.807 "allow_any_host": false, 00:15:30.807 "ana_reporting": false, 00:15:30.807 "max_cntlid": 65519, 00:15:30.807 "max_namespaces": 32, 00:15:30.807 "min_cntlid": 1, 00:15:30.807 "model_number": "SPDK bdev Controller", 00:15:30.807 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:30.807 "serial_number": "00000000000000000000" 00:15:30.807 } 00:15:30.807 }, 00:15:30.807 { 00:15:30.807 "method": "nvmf_subsystem_add_host", 00:15:30.807 "params": { 00:15:30.807 "host": "nqn.2016-06.io.spdk:host1", 00:15:30.807 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:30.807 "psk": "key0" 00:15:30.807 } 00:15:30.807 }, 
00:15:30.807 { 00:15:30.807 "method": "nvmf_subsystem_add_ns", 00:15:30.807 "params": { 00:15:30.807 "namespace": { 00:15:30.807 "bdev_name": "malloc0", 00:15:30.807 "nguid": "2465F632CFAB4F0CB7D6E91F17990A69", 00:15:30.807 "no_auto_visible": false, 00:15:30.807 "nsid": 1, 00:15:30.807 "uuid": "2465f632-cfab-4f0c-b7d6-e91f17990a69" 00:15:30.807 }, 00:15:30.807 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:15:30.807 } 00:15:30.807 }, 00:15:30.807 { 00:15:30.807 "method": "nvmf_subsystem_add_listener", 00:15:30.807 "params": { 00:15:30.807 "listen_address": { 00:15:30.807 "adrfam": "IPv4", 00:15:30.807 "traddr": "10.0.0.2", 00:15:30.807 "trsvcid": "4420", 00:15:30.807 "trtype": "TCP" 00:15:30.807 }, 00:15:30.807 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:30.807 "secure_channel": false, 00:15:30.807 "sock_impl": "ssl" 00:15:30.807 } 00:15:30.807 } 00:15:30.807 ] 00:15:30.807 } 00:15:30.807 ] 00:15:30.807 }' 00:15:30.807 18:34:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84748 00:15:30.807 18:34:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84748 00:15:30.807 18:34:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:15:30.807 18:34:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84748 ']' 00:15:30.807 18:34:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:30.807 18:34:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:30.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:30.807 18:34:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:30.807 18:34:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:30.807 18:34:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:30.807 [2024-07-15 18:34:53.229540] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:15:30.807 [2024-07-15 18:34:53.229629] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:30.807 [2024-07-15 18:34:53.355249] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:31.077 [2024-07-15 18:34:53.448576] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:31.077 [2024-07-15 18:34:53.448628] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:31.077 [2024-07-15 18:34:53.448638] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:31.077 [2024-07-15 18:34:53.448646] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:31.077 [2024-07-15 18:34:53.448653] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
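(Annotation, not part of the captured log.) The JSON blob echoed above is the target-side configuration that tls.sh pipes to nvmf_tgt through '-c /dev/fd/62'. As a reading aid, here is a minimal sketch of an equivalent runtime sequence over rpc.py against an already-running target; the method names, NQNs, PSK name "key0", key path and the 10.0.0.2:4420 listener are taken from that config, while the CLI flag spellings are assumptions and may differ between SPDK versions.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc keyring_file_add_key key0 /tmp/tmp.Ukncj92f3M                 # register the TLS PSK file under the name key0
  $rpc nvmf_create_transport -t TCP                                  # TCP transport; TLS is negotiated per listener/host
  $rpc bdev_malloc_create -b malloc0 32 4096                         # 8192 blocks x 4096 B = 32 MiB namespace backing
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s 00000000000000000000
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0   # flag spelling assumed
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # The saved config additionally pins "sock_impl": "ssl" and "secure_channel": false on the
  # listener; in this sketch that selection is left to the startup JSON rather than a CLI flag.

The nvmf_tgt startup notices that follow confirm the TCP transport init and the (experimental) TLS listener on 10.0.0.2 port 4420.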
00:15:31.077 [2024-07-15 18:34:53.448735] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:31.077 [2024-07-15 18:34:53.661942] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:31.336 [2024-07-15 18:34:53.693839] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:31.336 [2024-07-15 18:34:53.694005] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:31.596 18:34:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:31.596 18:34:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:31.596 18:34:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:31.596 18:34:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:31.596 18:34:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:31.596 18:34:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:31.596 18:34:54 nvmf_tcp.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=84792 00:15:31.596 18:34:54 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 84792 /var/tmp/bdevperf.sock 00:15:31.596 18:34:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84792 ']' 00:15:31.596 18:34:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:31.596 18:34:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:31.596 18:34:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:31.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
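(Annotation, not part of the captured log.) The waitforlisten above blocks until the bdevperf application started with '-z -r /var/tmp/bdevperf.sock' exposes its RPC socket; the JSON echoed below then configures the initiator side, registering the same PSK as "key0" and attaching the controller with "psk": "key0". A rough hand-driven equivalent over the bdevperf RPC socket could look like the sketch below; values mirror the echoed config, and the attach-controller flag names are assumptions that may differ between SPDK versions.

  rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock'
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 &
  # (the test waits for /var/tmp/bdevperf.sock via waitforlisten before issuing any RPCs)
  $rpc keyring_file_add_key key0 /tmp/tmp.Ukncj92f3M                 # same PSK file as registered on the target side
  $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
       -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0    # flag names assumed
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

This matches the flow the log records next: bdev_nvme_get_controllers reports nvme0, perform_tests runs the 1-second verify workload, and the bdevperf process (pid 84792) is killed afterwards.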
00:15:31.596 18:34:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:31.596 18:34:54 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:15:31.596 18:34:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:31.596 18:34:54 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:15:31.596 "subsystems": [ 00:15:31.596 { 00:15:31.596 "subsystem": "keyring", 00:15:31.596 "config": [ 00:15:31.596 { 00:15:31.596 "method": "keyring_file_add_key", 00:15:31.596 "params": { 00:15:31.596 "name": "key0", 00:15:31.596 "path": "/tmp/tmp.Ukncj92f3M" 00:15:31.596 } 00:15:31.596 } 00:15:31.596 ] 00:15:31.596 }, 00:15:31.596 { 00:15:31.596 "subsystem": "iobuf", 00:15:31.596 "config": [ 00:15:31.596 { 00:15:31.596 "method": "iobuf_set_options", 00:15:31.596 "params": { 00:15:31.596 "large_bufsize": 135168, 00:15:31.596 "large_pool_count": 1024, 00:15:31.596 "small_bufsize": 8192, 00:15:31.596 "small_pool_count": 8192 00:15:31.596 } 00:15:31.596 } 00:15:31.596 ] 00:15:31.596 }, 00:15:31.596 { 00:15:31.596 "subsystem": "sock", 00:15:31.596 "config": [ 00:15:31.596 { 00:15:31.596 "method": "sock_set_default_impl", 00:15:31.596 "params": { 00:15:31.596 "impl_name": "posix" 00:15:31.596 } 00:15:31.596 }, 00:15:31.596 { 00:15:31.596 "method": "sock_impl_set_options", 00:15:31.596 "params": { 00:15:31.596 "enable_ktls": false, 00:15:31.596 "enable_placement_id": 0, 00:15:31.596 "enable_quickack": false, 00:15:31.596 "enable_recv_pipe": true, 00:15:31.596 "enable_zerocopy_send_client": false, 00:15:31.596 "enable_zerocopy_send_server": true, 00:15:31.596 "impl_name": "ssl", 00:15:31.596 "recv_buf_size": 4096, 00:15:31.596 "send_buf_size": 4096, 00:15:31.596 "tls_version": 0, 00:15:31.596 "zerocopy_threshold": 0 00:15:31.596 } 00:15:31.596 }, 00:15:31.596 { 00:15:31.596 "method": "sock_impl_set_options", 00:15:31.596 "params": { 00:15:31.596 "enable_ktls": false, 00:15:31.596 "enable_placement_id": 0, 00:15:31.596 "enable_quickack": false, 00:15:31.596 "enable_recv_pipe": true, 00:15:31.596 "enable_zerocopy_send_client": false, 00:15:31.596 "enable_zerocopy_send_server": true, 00:15:31.596 "impl_name": "posix", 00:15:31.596 "recv_buf_size": 2097152, 00:15:31.596 "send_buf_size": 2097152, 00:15:31.596 "tls_version": 0, 00:15:31.596 "zerocopy_threshold": 0 00:15:31.596 } 00:15:31.596 } 00:15:31.596 ] 00:15:31.596 }, 00:15:31.596 { 00:15:31.596 "subsystem": "vmd", 00:15:31.596 "config": [] 00:15:31.596 }, 00:15:31.596 { 00:15:31.596 "subsystem": "accel", 00:15:31.596 "config": [ 00:15:31.596 { 00:15:31.596 "method": "accel_set_options", 00:15:31.596 "params": { 00:15:31.596 "buf_count": 2048, 00:15:31.596 "large_cache_size": 16, 00:15:31.596 "sequence_count": 2048, 00:15:31.596 "small_cache_size": 128, 00:15:31.596 "task_count": 2048 00:15:31.596 } 00:15:31.596 } 00:15:31.596 ] 00:15:31.596 }, 00:15:31.596 { 00:15:31.596 "subsystem": "bdev", 00:15:31.596 "config": [ 00:15:31.596 { 00:15:31.596 "method": "bdev_set_options", 00:15:31.596 "params": { 00:15:31.596 "bdev_auto_examine": true, 00:15:31.596 "bdev_io_cache_size": 256, 00:15:31.596 "bdev_io_pool_size": 65535, 00:15:31.596 "iobuf_large_cache_size": 16, 00:15:31.596 "iobuf_small_cache_size": 128 00:15:31.596 } 00:15:31.596 }, 00:15:31.596 { 00:15:31.596 "method": "bdev_raid_set_options", 00:15:31.596 "params": { 00:15:31.596 "process_window_size_kb": 1024 00:15:31.596 } 00:15:31.596 }, 00:15:31.596 
{ 00:15:31.596 "method": "bdev_iscsi_set_options", 00:15:31.596 "params": { 00:15:31.596 "timeout_sec": 30 00:15:31.596 } 00:15:31.596 }, 00:15:31.596 { 00:15:31.596 "method": "bdev_nvme_set_options", 00:15:31.596 "params": { 00:15:31.596 "action_on_timeout": "none", 00:15:31.596 "allow_accel_sequence": false, 00:15:31.596 "arbitration_burst": 0, 00:15:31.596 "bdev_retry_count": 3, 00:15:31.596 "ctrlr_loss_timeout_sec": 0, 00:15:31.596 "delay_cmd_submit": true, 00:15:31.596 "dhchap_dhgroups": [ 00:15:31.596 "null", 00:15:31.596 "ffdhe2048", 00:15:31.596 "ffdhe3072", 00:15:31.596 "ffdhe4096", 00:15:31.596 "ffdhe6144", 00:15:31.596 "ffdhe8192" 00:15:31.596 ], 00:15:31.596 "dhchap_digests": [ 00:15:31.596 "sha256", 00:15:31.596 "sha384", 00:15:31.596 "sha512" 00:15:31.596 ], 00:15:31.596 "disable_auto_failback": false, 00:15:31.596 "fast_io_fail_timeout_sec": 0, 00:15:31.596 "generate_uuids": false, 00:15:31.596 "high_priority_weight": 0, 00:15:31.596 "io_path_stat": false, 00:15:31.596 "io_queue_requests": 512, 00:15:31.596 "keep_alive_timeout_ms": 10000, 00:15:31.596 "low_priority_weight": 0, 00:15:31.596 "medium_priority_weight": 0, 00:15:31.596 "nvme_adminq_poll_period_us": 10000, 00:15:31.596 "nvme_error_stat": false, 00:15:31.596 "nvme_ioq_poll_period_us": 0, 00:15:31.596 "rdma_cm_event_timeout_ms": 0, 00:15:31.596 "rdma_max_cq_size": 0, 00:15:31.596 "rdma_srq_size": 0, 00:15:31.596 "reconnect_delay_sec": 0, 00:15:31.596 "timeout_admin_us": 0, 00:15:31.596 "timeout_us": 0, 00:15:31.596 "transport_ack_timeout": 0, 00:15:31.596 "transport_retry_count": 4, 00:15:31.596 "transport_tos": 0 00:15:31.596 } 00:15:31.596 }, 00:15:31.596 { 00:15:31.596 "method": "bdev_nvme_attach_controller", 00:15:31.596 "params": { 00:15:31.596 "adrfam": "IPv4", 00:15:31.596 "ctrlr_loss_timeout_sec": 0, 00:15:31.596 "ddgst": false, 00:15:31.596 "fast_io_fail_timeout_sec": 0, 00:15:31.596 "hdgst": false, 00:15:31.596 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:31.596 "name": "nvme0", 00:15:31.596 "prchk_guard": false, 00:15:31.596 "prchk_reftag": false, 00:15:31.596 "psk": "key0", 00:15:31.596 "reconnect_delay_sec": 0, 00:15:31.596 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:31.596 "traddr": "10.0.0.2", 00:15:31.596 "trsvcid": "4420", 00:15:31.596 "trtype": "TCP" 00:15:31.596 } 00:15:31.596 }, 00:15:31.597 { 00:15:31.597 "method": "bdev_nvme_set_hotplug", 00:15:31.597 "params": { 00:15:31.597 "enable": false, 00:15:31.597 "period_us": 100000 00:15:31.597 } 00:15:31.597 }, 00:15:31.597 { 00:15:31.597 "method": "bdev_enable_histogram", 00:15:31.597 "params": { 00:15:31.597 "enable": true, 00:15:31.597 "name": "nvme0n1" 00:15:31.597 } 00:15:31.597 }, 00:15:31.597 { 00:15:31.597 "method": "bdev_wait_for_examine" 00:15:31.597 } 00:15:31.597 ] 00:15:31.597 }, 00:15:31.597 { 00:15:31.597 "subsystem": "nbd", 00:15:31.597 "config": [] 00:15:31.597 } 00:15:31.597 ] 00:15:31.597 }' 00:15:31.597 [2024-07-15 18:34:54.185540] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 
00:15:31.597 [2024-07-15 18:34:54.185624] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84792 ] 00:15:31.856 [2024-07-15 18:34:54.327064] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:31.856 [2024-07-15 18:34:54.420898] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:32.114 [2024-07-15 18:34:54.574509] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:32.681 18:34:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:32.681 18:34:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:32.681 18:34:55 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:32.681 18:34:55 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:15:32.681 18:34:55 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:32.681 18:34:55 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:32.939 Running I/O for 1 seconds... 00:15:33.871 00:15:33.871 Latency(us) 00:15:33.871 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:33.871 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:33.871 Verification LBA range: start 0x0 length 0x2000 00:15:33.871 nvme0n1 : 1.01 5772.59 22.55 0.00 0.00 22007.93 4763.86 16634.04 00:15:33.871 =================================================================================================================== 00:15:33.871 Total : 5772.59 22.55 0.00 0.00 22007.93 4763.86 16634.04 00:15:33.871 0 00:15:33.871 18:34:56 nvmf_tcp.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:15:33.871 18:34:56 nvmf_tcp.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:15:33.871 18:34:56 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:15:33.871 18:34:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:15:33.871 18:34:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:15:33.871 18:34:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:15:33.871 18:34:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:33.871 18:34:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:15:33.871 18:34:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:15:33.871 18:34:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:15:33.871 18:34:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:33.871 nvmf_trace.0 00:15:33.871 18:34:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:15:33.871 18:34:56 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 84792 00:15:33.871 18:34:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84792 ']' 00:15:33.871 18:34:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84792 00:15:33.871 18:34:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:34.131 
18:34:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:34.131 18:34:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84792 00:15:34.131 killing process with pid 84792 00:15:34.131 Received shutdown signal, test time was about 1.000000 seconds 00:15:34.131 00:15:34.131 Latency(us) 00:15:34.131 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:34.131 =================================================================================================================== 00:15:34.131 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:34.131 18:34:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:34.131 18:34:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:34.131 18:34:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84792' 00:15:34.131 18:34:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84792 00:15:34.131 18:34:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84792 00:15:34.131 18:34:56 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:15:34.131 18:34:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:34.131 18:34:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:15:34.389 18:34:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:34.389 18:34:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:15:34.389 18:34:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:34.389 18:34:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:34.389 rmmod nvme_tcp 00:15:34.389 rmmod nvme_fabrics 00:15:34.389 rmmod nvme_keyring 00:15:34.389 18:34:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:34.389 18:34:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:15:34.389 18:34:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:15:34.389 18:34:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 84748 ']' 00:15:34.389 18:34:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 84748 00:15:34.389 18:34:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84748 ']' 00:15:34.389 18:34:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84748 00:15:34.389 18:34:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:34.389 18:34:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:34.389 18:34:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84748 00:15:34.389 killing process with pid 84748 00:15:34.389 18:34:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:34.389 18:34:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:34.389 18:34:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84748' 00:15:34.389 18:34:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84748 00:15:34.389 18:34:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84748 00:15:34.647 18:34:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:34.647 18:34:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:34.647 18:34:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:34.647 18:34:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:34.647 18:34:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:34.647 18:34:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:34.647 18:34:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:34.647 18:34:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:34.647 18:34:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:34.647 18:34:57 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.tnWw9LMegD /tmp/tmp.IL7iignecZ /tmp/tmp.Ukncj92f3M 00:15:34.647 00:15:34.647 real 1m21.183s 00:15:34.647 user 2m1.674s 00:15:34.647 sys 0m30.176s 00:15:34.647 18:34:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:34.647 18:34:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:34.647 ************************************ 00:15:34.647 END TEST nvmf_tls 00:15:34.647 ************************************ 00:15:34.647 18:34:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:34.647 18:34:57 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:34.647 18:34:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:34.647 18:34:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:34.647 18:34:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:34.647 ************************************ 00:15:34.647 START TEST nvmf_fips 00:15:34.647 ************************************ 00:15:34.647 18:34:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:34.906 * Looking for test storage... 
00:15:34.906 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:15:34.906 18:34:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:34.906 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:15:34.906 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:34.906 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:34.906 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:34.906 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:34.906 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:34.906 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:34.906 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:34.906 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:34.906 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:34.906 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:34.906 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:15:34.906 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=ee8aff67-4252-4979-91cf-1a72f40d57b6 00:15:34.906 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:34.906 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:34.906 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:34.906 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:34.906 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:34.906 18:34:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:34.906 18:34:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:34.906 18:34:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:34.906 18:34:57 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.906 18:34:57 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.906 18:34:57 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.906 18:34:57 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:15:34.906 18:34:57 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.906 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:15:34.906 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:34.906 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:34.906 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:34.906 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:34.906 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:34.906 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:34.906 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:34.906 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:34.906 18:34:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:34.906 18:34:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:15:34.906 18:34:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:15:34.906 18:34:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:15:34.906 18:34:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:15:34.906 18:34:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:15:34.906 18:34:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:15:34.906 18:34:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:15:34.906 18:34:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:15:34.906 18:34:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:15:34.906 18:34:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:15:34.906 18:34:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:15:34.906 18:34:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:15:34.906 18:34:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:15:34.906 18:34:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:15:34.906 18:34:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:15:34.906 18:34:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:15:34.906 18:34:57 nvmf_tcp.nvmf_fips -- 
scripts/common.sh@341 -- # case "$op" in 00:15:34.906 18:34:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:15:34.906 18:34:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:15:34.906 18:34:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:34.906 18:34:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:15:34.906 18:34:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:15:34.906 18:34:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:34.906 18:34:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:15:34.906 18:34:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:15:34.906 18:34:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:15:34.906 18:34:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:15:34.906 18:34:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:34.906 18:34:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:15:34.906 18:34:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:15:34.906 18:34:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:15:34.906 18:34:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:15:34.906 18:34:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:15:34.906 18:34:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:34.906 18:34:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:15:34.906 18:34:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:15:34.906 18:34:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:34.906 18:34:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:15:34.906 18:34:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:15:34.906 18:34:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:15:34.906 18:34:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:15:34.906 18:34:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:34.906 18:34:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:15:34.907 18:34:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:15:34.907 18:34:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:15:34.907 18:34:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:15:34.907 18:34:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:15:34.907 18:34:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:34.907 18:34:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:15:34.907 18:34:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:15:34.907 18:34:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:15:34.907 18:34:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:15:34.907 18:34:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:15:34.907 18:34:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:15:34.907 18:34:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:15:34.907 18:34:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:34.907 18:34:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:15:34.907 18:34:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:15:34.907 18:34:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:15:34.907 18:34:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:15:34.907 18:34:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:15:34.907 18:34:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:15:34.907 18:34:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:15:34.907 18:34:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:15:34.907 18:34:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:15:34.907 18:34:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:15:34.907 18:34:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:15:34.907 18:34:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:15:34.907 18:34:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:15:34.907 18:34:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:15:34.907 18:34:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:15:34.907 18:34:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:15:34.907 18:34:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:15:34.907 18:34:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:15:34.907 18:34:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:15:34.907 18:34:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:15:34.907 18:34:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:15:34.907 18:34:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:15:34.907 18:34:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:15:34.907 18:34:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:15:34.907 18:34:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:15:34.907 18:34:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:15:34.907 18:34:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:15:34.907 18:34:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:15:34.907 18:34:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:34.907 18:34:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:15:34.907 18:34:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:34.907 18:34:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:15:34.907 18:34:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:34.907 18:34:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:15:34.907 18:34:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:15:34.907 18:34:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:15:34.907 Error setting digest 00:15:34.907 008229F80D7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:15:34.907 008229F80D7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:15:35.165 18:34:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:15:35.165 18:34:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:35.165 18:34:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:35.165 18:34:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:35.165 18:34:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:15:35.165 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:35.165 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:35.165 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:35.165 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:35.165 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:35.165 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:35.165 18:34:57 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:35.165 18:34:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:35.165 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:35.165 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:35.165 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:35.165 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:35.165 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:35.165 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:35.165 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:35.165 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:35.165 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:35.165 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:35.165 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:35.165 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:35.165 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:35.165 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:35.165 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:35.165 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:35.165 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:35.165 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:35.165 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:35.165 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:35.165 Cannot find device "nvmf_tgt_br" 00:15:35.165 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # true 00:15:35.166 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:35.166 Cannot find device "nvmf_tgt_br2" 00:15:35.166 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # true 00:15:35.166 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:35.166 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:35.166 Cannot find device "nvmf_tgt_br" 00:15:35.166 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # true 00:15:35.166 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:35.166 Cannot find device "nvmf_tgt_br2" 00:15:35.166 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # true 00:15:35.166 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:35.166 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:35.166 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:35.166 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:35.166 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # true 00:15:35.166 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:35.166 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:35.166 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # true 00:15:35.166 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:35.166 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:35.166 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:35.166 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:35.166 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:35.166 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:35.166 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:35.166 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:35.424 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:35.424 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:35.424 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:35.424 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:35.424 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:35.424 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:35.424 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:35.424 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:35.424 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:35.424 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:35.424 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:35.424 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:35.424 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:35.424 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:35.424 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:35.424 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:35.424 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:35.424 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.099 ms 00:15:35.424 00:15:35.424 --- 10.0.0.2 ping statistics --- 00:15:35.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:35.424 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:15:35.424 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:35.424 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:15:35.424 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:15:35.424 00:15:35.424 --- 10.0.0.3 ping statistics --- 00:15:35.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:35.424 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:15:35.424 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:35.424 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:35.424 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:15:35.424 00:15:35.424 --- 10.0.0.1 ping statistics --- 00:15:35.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:35.424 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:15:35.424 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:35.424 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@433 -- # return 0 00:15:35.424 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:35.424 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:35.424 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:35.424 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:35.424 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:35.424 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:35.424 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:35.424 18:34:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:15:35.424 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:35.424 18:34:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:35.424 18:34:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:35.424 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=85078 00:15:35.424 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 85078 00:15:35.424 18:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:35.424 18:34:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 85078 ']' 00:15:35.424 18:34:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:35.424 18:34:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:35.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:35.424 18:34:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:35.424 18:34:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:35.424 18:34:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:35.424 [2024-07-15 18:34:58.022432] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 
00:15:35.424 [2024-07-15 18:34:58.022516] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:35.682 [2024-07-15 18:34:58.162951] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:35.682 [2024-07-15 18:34:58.258279] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:35.682 [2024-07-15 18:34:58.258324] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:35.682 [2024-07-15 18:34:58.258333] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:35.682 [2024-07-15 18:34:58.258341] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:35.682 [2024-07-15 18:34:58.258349] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:35.682 [2024-07-15 18:34:58.258373] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:36.250 18:34:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:36.250 18:34:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:15:36.250 18:34:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:36.250 18:34:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:36.250 18:34:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:36.508 18:34:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:36.508 18:34:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:15:36.508 18:34:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:36.508 18:34:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:15:36.508 18:34:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:36.508 18:34:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:15:36.508 18:34:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:15:36.508 18:34:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:15:36.508 18:34:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:36.508 [2024-07-15 18:34:59.088530] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:36.508 [2024-07-15 18:34:59.104438] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:36.508 [2024-07-15 18:34:59.104598] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:36.766 [2024-07-15 18:34:59.133225] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:36.766 malloc0 00:15:36.766 18:34:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:36.766 18:34:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=85130 00:15:36.766 18:34:59 nvmf_tcp.nvmf_fips -- 
fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:36.766 18:34:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 85130 /var/tmp/bdevperf.sock 00:15:36.766 18:34:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 85130 ']' 00:15:36.766 18:34:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:36.766 18:34:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:36.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:36.766 18:34:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:36.766 18:34:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:36.766 18:34:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:36.766 [2024-07-15 18:34:59.238032] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:15:36.766 [2024-07-15 18:34:59.238099] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85130 ] 00:15:36.766 [2024-07-15 18:34:59.379141] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:37.025 [2024-07-15 18:34:59.470090] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:37.598 18:35:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:37.598 18:35:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:15:37.598 18:35:00 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:15:37.856 [2024-07-15 18:35:00.240765] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:37.856 [2024-07-15 18:35:00.240860] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:37.856 TLSTESTn1 00:15:37.856 18:35:00 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:37.856 Running I/O for 10 seconds... 
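For readability, the TLS I/O phase traced above reduces to four steps: write the PSK into a 0600-mode key file, start bdevperf in wait-for-RPC mode (-z) on its own RPC socket, attach a TLS-enabled NVMe/TCP controller to the listener at 10.0.0.2:4420 through that socket, and launch the workload. The sketch below is condensed from the commands visible in the trace (same paths, NQNs and PSK); it is an approximation of fips.sh, not the script itself, and omits the waitforlisten polling between steps 2 and 3.

SPDK=/home/vagrant/spdk_repo/spdk
KEY=$SPDK/test/nvmf/fips/key.txt

# 1. The PSK interchange key must live in a file with 0600 permissions.
echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$KEY"
chmod 0600 "$KEY"

# 2. bdevperf is started idle (-z) so bdevs can be attached over /var/tmp/bdevperf.sock.
"$SPDK/build/examples/bdevperf" -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 &

# 3. Attach the TLS controller; --psk points at the key file written above.
"$SPDK/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$KEY"

# 4. Run the 10-second verify workload; the latency table below is its output.
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests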
00:15:47.881 00:15:47.881 Latency(us) 00:15:47.881 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:47.881 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:47.881 Verification LBA range: start 0x0 length 0x2000 00:15:47.881 TLSTESTn1 : 10.01 5577.00 21.79 0.00 0.00 22916.29 4658.58 19581.84 00:15:47.881 =================================================================================================================== 00:15:47.881 Total : 5577.00 21.79 0.00 0.00 22916.29 4658.58 19581.84 00:15:47.881 0 00:15:47.881 18:35:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:15:47.881 18:35:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:15:47.881 18:35:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:15:47.881 18:35:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:15:47.881 18:35:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:15:47.881 18:35:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:47.881 18:35:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:15:47.881 18:35:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:15:47.881 18:35:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:15:47.881 18:35:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:47.881 nvmf_trace.0 00:15:48.139 18:35:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:15:48.139 18:35:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 85130 00:15:48.139 18:35:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 85130 ']' 00:15:48.139 18:35:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 85130 00:15:48.139 18:35:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:15:48.139 18:35:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:48.139 18:35:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85130 00:15:48.139 18:35:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:15:48.139 18:35:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:15:48.139 18:35:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85130' 00:15:48.139 killing process with pid 85130 00:15:48.139 Received shutdown signal, test time was about 10.000000 seconds 00:15:48.139 00:15:48.139 Latency(us) 00:15:48.139 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:48.139 =================================================================================================================== 00:15:48.139 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:48.139 18:35:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 85130 00:15:48.139 [2024-07-15 18:35:10.574035] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:48.139 18:35:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 85130 00:15:48.434 18:35:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:15:48.434 18:35:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 
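The teardown that begins here (cleanup followed by nvmftestfini, continued in the trace below) is what keeps back-to-back suites from interfering: the trace shared-memory file is archived, both SPDK processes are killed by pid, the kernel initiator modules are unloaded, and the veth/namespace topology is removed. A rough standalone equivalent, with the namespace removal approximated by a plain ip netns delete:

# Archive the SPDK trace ring from shared memory for offline analysis.
shm_file=$(find /dev/shm -name '*.0' -printf '%f\n')        # -> nvmf_trace.0
tar -C /dev/shm/ -czf nvmf_trace.0_shm.tar.gz "$shm_file"

# Stop bdevperf (pid 85130) and the nvmf target (pid 85078) started earlier.
kill 85130 85078 2>/dev/null

# Unload the initiator modules; the rmmod lines below show what actually drops out.
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics

# Drop the target namespace and flush the initiator address.
ip netns delete nvmf_tgt_ns_spdk 2>/dev/null
ip -4 addr flush nvmf_init_if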
00:15:48.434 18:35:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:15:48.434 18:35:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:48.434 18:35:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:15:48.434 18:35:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:48.434 18:35:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:48.434 rmmod nvme_tcp 00:15:48.434 rmmod nvme_fabrics 00:15:48.434 rmmod nvme_keyring 00:15:48.434 18:35:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:48.434 18:35:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:15:48.434 18:35:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:15:48.434 18:35:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 85078 ']' 00:15:48.434 18:35:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 85078 00:15:48.434 18:35:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 85078 ']' 00:15:48.434 18:35:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 85078 00:15:48.434 18:35:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:15:48.434 18:35:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:48.434 18:35:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85078 00:15:48.434 18:35:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:48.434 killing process with pid 85078 00:15:48.434 18:35:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:48.434 18:35:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85078' 00:15:48.434 18:35:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 85078 00:15:48.434 [2024-07-15 18:35:10.897318] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:48.434 18:35:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 85078 00:15:48.692 18:35:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:48.692 18:35:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:48.692 18:35:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:48.692 18:35:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:48.692 18:35:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:48.692 18:35:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:48.692 18:35:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:48.692 18:35:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:48.692 18:35:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:48.692 18:35:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:15:48.692 00:15:48.692 real 0m13.976s 00:15:48.692 user 0m17.816s 00:15:48.692 sys 0m6.123s 00:15:48.692 18:35:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:48.692 ************************************ 00:15:48.692 END TEST nvmf_fips 00:15:48.692 18:35:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:48.692 ************************************ 00:15:48.692 18:35:11 
nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:48.692 18:35:11 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:15:48.692 18:35:11 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ virt == phy ]] 00:15:48.692 18:35:11 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:15:48.692 18:35:11 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:48.692 18:35:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:48.692 18:35:11 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:15:48.692 18:35:11 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:48.693 18:35:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:48.693 18:35:11 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:15:48.693 18:35:11 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:15:48.693 18:35:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:48.693 18:35:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:48.693 18:35:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:48.693 ************************************ 00:15:48.693 START TEST nvmf_multicontroller 00:15:48.693 ************************************ 00:15:48.693 18:35:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:15:48.952 * Looking for test storage... 00:15:48.952 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:48.952 18:35:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:48.952 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:15:48.952 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:48.952 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:48.952 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:48.952 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:48.952 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:48.952 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:48.952 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:48.952 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:48.952 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:48.952 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:48.952 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:15:48.952 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=ee8aff67-4252-4979-91cf-1a72f40d57b6 00:15:48.952 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:48.952 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:48.952 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:48.952 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:48.952 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:48.952 18:35:11 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:48.952 18:35:11 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:48.952 18:35:11 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:48.952 18:35:11 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.952 18:35:11 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.952 18:35:11 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.952 18:35:11 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:15:48.952 18:35:11 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.952 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:15:48.953 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:48.953 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:48.953 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:48.953 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
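Before the multicontroller test issues any RPCs, nvmftestinit rebuilds the same virtual topology the FIPS test used: one initiator veth pair left in the root namespace, two target veth pairs moved into nvmf_tgt_ns_spdk, and a bridge joining the host-side peers, with 10.0.0.1 on the initiator and 10.0.0.2/10.0.0.3 on the target interfaces. The following is condensed from the ip/iptables commands that follow in the trace, with the individual link-up commands folded into a loop:

# Target side lives in its own namespace; the initiator stays in the root namespace.
ip netns add nvmf_tgt_ns_spdk

# Three veth pairs: the *_if ends carry addresses, the *_br ends join the bridge.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Addressing matches the trace: initiator .1, target listeners .2 and .3.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring everything up and enslave the host-side peers to one bridge.
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Allow NVMe/TCP (port 4420) in and hairpin forwarding across the bridge.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT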
00:15:48.953 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:48.953 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:48.953 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:48.953 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:48.953 18:35:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:48.953 18:35:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:48.953 18:35:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:15:48.953 18:35:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:15:48.953 18:35:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:48.953 18:35:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:15:48.953 18:35:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:15:48.953 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:48.953 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:48.953 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:48.953 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:48.953 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:48.953 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:48.953 18:35:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:48.953 18:35:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:48.953 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:48.953 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:48.953 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:48.953 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:48.953 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:48.953 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:48.953 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:48.953 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:48.953 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:48.953 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:48.953 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:48.953 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:48.953 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:48.953 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:48.953 18:35:11 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:48.953 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:48.953 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:48.953 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:48.953 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:48.953 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:48.953 Cannot find device "nvmf_tgt_br" 00:15:48.953 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@155 -- # true 00:15:48.953 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:48.953 Cannot find device "nvmf_tgt_br2" 00:15:48.953 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@156 -- # true 00:15:48.953 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:48.953 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:48.953 Cannot find device "nvmf_tgt_br" 00:15:48.953 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@158 -- # true 00:15:48.953 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:48.953 Cannot find device "nvmf_tgt_br2" 00:15:48.953 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@159 -- # true 00:15:48.953 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:49.213 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:49.213 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:49.213 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:49.213 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@162 -- # true 00:15:49.213 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:49.213 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:49.213 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@163 -- # true 00:15:49.213 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:49.213 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:49.213 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:49.213 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:49.213 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:49.213 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:49.213 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:49.213 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:49.213 18:35:11 nvmf_tcp.nvmf_multicontroller -- 
nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:49.213 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:49.213 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:49.213 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:49.213 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:49.213 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:49.213 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:49.213 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:49.213 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:49.213 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:49.213 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:49.213 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:49.213 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:49.213 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:49.213 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:49.213 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:49.213 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:49.213 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.113 ms 00:15:49.213 00:15:49.213 --- 10.0.0.2 ping statistics --- 00:15:49.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:49.213 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:15:49.213 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:49.213 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:49.213 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:15:49.213 00:15:49.213 --- 10.0.0.3 ping statistics --- 00:15:49.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:49.213 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:15:49.213 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:49.213 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:49.213 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:15:49.213 00:15:49.213 --- 10.0.0.1 ping statistics --- 00:15:49.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:49.213 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:15:49.213 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:49.213 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@433 -- # return 0 00:15:49.213 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:49.213 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:49.213 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:49.213 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:49.213 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:49.213 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:49.213 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:49.472 18:35:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:15:49.472 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:49.472 18:35:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:49.472 18:35:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:15:49.472 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=85503 00:15:49.472 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 85503 00:15:49.472 18:35:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 85503 ']' 00:15:49.472 18:35:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:49.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:49.472 18:35:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:49.472 18:35:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:49.472 18:35:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:49.472 18:35:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:49.472 18:35:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:15:49.472 [2024-07-15 18:35:11.912791] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:15:49.472 [2024-07-15 18:35:11.912859] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:49.472 [2024-07-15 18:35:12.044727] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:49.730 [2024-07-15 18:35:12.139525] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:15:49.730 [2024-07-15 18:35:12.139574] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:49.730 [2024-07-15 18:35:12.139584] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:49.730 [2024-07-15 18:35:12.139592] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:49.730 [2024-07-15 18:35:12.139599] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:49.730 [2024-07-15 18:35:12.139777] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:49.730 [2024-07-15 18:35:12.140648] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:49.730 [2024-07-15 18:35:12.140649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:50.298 18:35:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:50.298 18:35:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:15:50.298 18:35:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:50.298 18:35:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:50.298 18:35:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:15:50.298 18:35:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:50.298 18:35:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:50.298 18:35:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.298 18:35:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:15:50.298 [2024-07-15 18:35:12.848066] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:50.298 18:35:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.298 18:35:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:50.298 18:35:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.298 18:35:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:15:50.298 Malloc0 00:15:50.298 18:35:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.298 18:35:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:50.298 18:35:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.298 18:35:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:15:50.298 18:35:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.298 18:35:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:50.298 18:35:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.298 18:35:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:15:50.298 18:35:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.298 18:35:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 
-- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:50.558 18:35:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.558 18:35:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:15:50.558 [2024-07-15 18:35:12.914115] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:50.558 18:35:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.558 18:35:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:50.558 18:35:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.558 18:35:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:15:50.558 [2024-07-15 18:35:12.926052] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:50.558 18:35:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.558 18:35:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:50.558 18:35:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.558 18:35:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:15:50.558 Malloc1 00:15:50.558 18:35:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.558 18:35:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:15:50.558 18:35:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.558 18:35:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:15:50.558 18:35:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.558 18:35:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:15:50.558 18:35:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.558 18:35:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:15:50.558 18:35:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.558 18:35:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:15:50.558 18:35:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.558 18:35:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:15:50.558 18:35:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.558 18:35:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:15:50.558 18:35:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.558 18:35:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:15:50.558 18:35:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.558 18:35:13 nvmf_tcp.nvmf_multicontroller -- 
host/multicontroller.sh@44 -- # bdevperf_pid=85558 00:15:50.558 18:35:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:15:50.558 18:35:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:50.558 18:35:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 85558 /var/tmp/bdevperf.sock 00:15:50.558 18:35:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 85558 ']' 00:15:50.558 18:35:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:50.558 18:35:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:50.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:50.558 18:35:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:50.558 18:35:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:50.558 18:35:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:15:51.494 18:35:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:51.494 18:35:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:15:51.494 18:35:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:15:51.494 18:35:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.494 18:35:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:15:51.494 NVMe0n1 00:15:51.494 18:35:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.494 18:35:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:51.494 18:35:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.494 18:35:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:15:51.494 18:35:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:15:51.494 18:35:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.494 1 00:15:51.494 18:35:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:15:51.494 18:35:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:15:51.494 18:35:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:15:51.494 18:35:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local 
arg=rpc_cmd 00:15:51.494 18:35:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:51.494 18:35:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:15:51.494 18:35:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:51.494 18:35:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:15:51.494 18:35:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.494 18:35:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:15:51.494 2024/07/15 18:35:14 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostnqn:nqn.2021-09-7.io.spdk:00001 hostsvcid:60000 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:15:51.494 request: 00:15:51.494 { 00:15:51.494 "method": "bdev_nvme_attach_controller", 00:15:51.494 "params": { 00:15:51.494 "name": "NVMe0", 00:15:51.494 "trtype": "tcp", 00:15:51.494 "traddr": "10.0.0.2", 00:15:51.494 "adrfam": "ipv4", 00:15:51.494 "trsvcid": "4420", 00:15:51.494 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:51.494 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:15:51.494 "hostaddr": "10.0.0.2", 00:15:51.494 "hostsvcid": "60000", 00:15:51.494 "prchk_reftag": false, 00:15:51.494 "prchk_guard": false, 00:15:51.494 "hdgst": false, 00:15:51.494 "ddgst": false 00:15:51.494 } 00:15:51.494 } 00:15:51.494 Got JSON-RPC error response 00:15:51.494 GoRPCClient: error on JSON-RPC call 00:15:51.494 18:35:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:15:51.494 18:35:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:15:51.494 18:35:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:51.494 18:35:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:51.494 18:35:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:51.494 18:35:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:15:51.494 18:35:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:15:51.494 18:35:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:15:51.494 18:35:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:15:51.494 18:35:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:51.494 18:35:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:15:51.494 18:35:14 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:51.494 18:35:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:15:51.494 18:35:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.494 18:35:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:15:51.494 2024/07/15 18:35:14 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostsvcid:60000 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:15:51.494 request: 00:15:51.494 { 00:15:51.494 "method": "bdev_nvme_attach_controller", 00:15:51.494 "params": { 00:15:51.495 "name": "NVMe0", 00:15:51.495 "trtype": "tcp", 00:15:51.495 "traddr": "10.0.0.2", 00:15:51.495 "adrfam": "ipv4", 00:15:51.495 "trsvcid": "4420", 00:15:51.495 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:15:51.495 "hostaddr": "10.0.0.2", 00:15:51.495 "hostsvcid": "60000", 00:15:51.495 "prchk_reftag": false, 00:15:51.495 "prchk_guard": false, 00:15:51.495 "hdgst": false, 00:15:51.495 "ddgst": false 00:15:51.495 } 00:15:51.495 } 00:15:51.495 Got JSON-RPC error response 00:15:51.495 GoRPCClient: error on JSON-RPC call 00:15:51.495 18:35:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:15:51.495 18:35:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:15:51.495 18:35:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:51.495 18:35:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:51.495 18:35:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:51.495 18:35:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:15:51.495 18:35:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:15:51.495 18:35:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:15:51.495 18:35:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:15:51.495 18:35:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:51.495 18:35:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:15:51.495 18:35:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:51.495 18:35:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:15:51.495 18:35:14 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.495 18:35:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:15:51.495 2024/07/15 18:35:14 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostsvcid:60000 multipath:disable name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:15:51.495 request: 00:15:51.495 { 00:15:51.495 "method": "bdev_nvme_attach_controller", 00:15:51.495 "params": { 00:15:51.495 "name": "NVMe0", 00:15:51.495 "trtype": "tcp", 00:15:51.495 "traddr": "10.0.0.2", 00:15:51.495 "adrfam": "ipv4", 00:15:51.495 "trsvcid": "4420", 00:15:51.495 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:51.495 "hostaddr": "10.0.0.2", 00:15:51.495 "hostsvcid": "60000", 00:15:51.495 "prchk_reftag": false, 00:15:51.495 "prchk_guard": false, 00:15:51.495 "hdgst": false, 00:15:51.495 "ddgst": false, 00:15:51.495 "multipath": "disable" 00:15:51.495 } 00:15:51.495 } 00:15:51.495 Got JSON-RPC error response 00:15:51.495 GoRPCClient: error on JSON-RPC call 00:15:51.495 18:35:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:15:51.495 18:35:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:15:51.495 18:35:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:51.495 18:35:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:51.495 18:35:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:51.495 18:35:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:15:51.495 18:35:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:15:51.495 18:35:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:15:51.495 18:35:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:15:51.495 18:35:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:51.495 18:35:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:15:51.495 18:35:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:51.495 18:35:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:15:51.495 18:35:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.495 18:35:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:15:51.495 2024/07/15 18:35:14 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 
ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostsvcid:60000 multipath:failover name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:15:51.495 request: 00:15:51.495 { 00:15:51.495 "method": "bdev_nvme_attach_controller", 00:15:51.495 "params": { 00:15:51.495 "name": "NVMe0", 00:15:51.495 "trtype": "tcp", 00:15:51.495 "traddr": "10.0.0.2", 00:15:51.495 "adrfam": "ipv4", 00:15:51.495 "trsvcid": "4420", 00:15:51.495 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:51.495 "hostaddr": "10.0.0.2", 00:15:51.495 "hostsvcid": "60000", 00:15:51.495 "prchk_reftag": false, 00:15:51.495 "prchk_guard": false, 00:15:51.495 "hdgst": false, 00:15:51.495 "ddgst": false, 00:15:51.495 "multipath": "failover" 00:15:51.495 } 00:15:51.495 } 00:15:51.495 Got JSON-RPC error response 00:15:51.495 GoRPCClient: error on JSON-RPC call 00:15:51.495 18:35:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:15:51.495 18:35:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:15:51.495 18:35:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:51.495 18:35:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:51.495 18:35:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:51.495 18:35:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:51.495 18:35:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.495 18:35:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:15:51.754 00:15:51.754 18:35:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.755 18:35:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:51.755 18:35:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.755 18:35:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:15:51.755 18:35:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.755 18:35:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:15:51.755 18:35:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.755 18:35:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:15:51.755 00:15:51.755 18:35:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.755 18:35:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:51.755 18:35:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.755 18:35:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # 
grep -c NVMe 00:15:51.755 18:35:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:15:51.755 18:35:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.755 18:35:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:15:51.755 18:35:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:53.130 0 00:15:53.130 18:35:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:15:53.130 18:35:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.130 18:35:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:15:53.130 18:35:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.130 18:35:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 85558 00:15:53.130 18:35:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 85558 ']' 00:15:53.130 18:35:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 85558 00:15:53.130 18:35:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:15:53.130 18:35:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:53.130 18:35:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85558 00:15:53.130 killing process with pid 85558 00:15:53.130 18:35:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:53.130 18:35:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:53.130 18:35:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85558' 00:15:53.130 18:35:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 85558 00:15:53.130 18:35:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 85558 00:15:53.130 18:35:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:53.130 18:35:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.130 18:35:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:15:53.130 18:35:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.130 18:35:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:15:53.130 18:35:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.130 18:35:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:15:53.130 18:35:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.130 18:35:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:15:53.130 18:35:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:53.130 18:35:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:15:53.130 18:35:15 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@1611 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:15:53.130 18:35:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:15:53.130 18:35:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:15:53.130 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:15:53.130 [2024-07-15 18:35:13.053903] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:15:53.130 [2024-07-15 18:35:13.053996] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85558 ] 00:15:53.130 [2024-07-15 18:35:13.193211] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:53.130 [2024-07-15 18:35:13.288097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.130 [2024-07-15 18:35:14.243963] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name cbecc7a4-c0f3-4b12-8eb6-65d13ca0ee5e already exists 00:15:53.130 [2024-07-15 18:35:14.244069] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:cbecc7a4-c0f3-4b12-8eb6-65d13ca0ee5e alias for bdev NVMe1n1 00:15:53.130 [2024-07-15 18:35:14.244084] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:15:53.130 Running I/O for 1 seconds... 00:15:53.130 00:15:53.130 Latency(us) 00:15:53.130 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:53.130 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:15:53.130 NVMe0n1 : 1.00 25998.50 101.56 0.00 0.00 4916.78 2171.37 9948.84 00:15:53.130 =================================================================================================================== 00:15:53.131 Total : 25998.50 101.56 0.00 0.00 4916.78 2171.37 9948.84 00:15:53.131 Received shutdown signal, test time was about 1.000000 seconds 00:15:53.131 00:15:53.131 Latency(us) 00:15:53.131 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:53.131 =================================================================================================================== 00:15:53.131 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:53.131 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:15:53.131 18:35:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:53.131 18:35:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:15:53.131 18:35:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:15:53.131 18:35:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:53.131 18:35:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:15:53.131 18:35:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:53.131 18:35:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:15:53.131 18:35:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:53.131 18:35:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:53.131 rmmod nvme_tcp 00:15:53.131 rmmod nvme_fabrics 00:15:53.389 rmmod nvme_keyring 00:15:53.389 18:35:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:53.389 18:35:15 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:15:53.389 18:35:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:15:53.389 18:35:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 85503 ']' 00:15:53.389 18:35:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 85503 00:15:53.389 18:35:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 85503 ']' 00:15:53.389 18:35:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 85503 00:15:53.389 18:35:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:15:53.389 18:35:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:53.389 18:35:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85503 00:15:53.389 18:35:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:53.389 18:35:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:53.389 killing process with pid 85503 00:15:53.389 18:35:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85503' 00:15:53.389 18:35:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 85503 00:15:53.389 18:35:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 85503 00:15:53.647 18:35:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:53.647 18:35:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:53.647 18:35:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:53.647 18:35:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:53.647 18:35:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:53.647 18:35:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:53.647 18:35:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:53.647 18:35:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:53.647 18:35:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:53.647 00:15:53.647 real 0m4.818s 00:15:53.647 user 0m14.526s 00:15:53.647 sys 0m1.222s 00:15:53.647 18:35:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:53.647 18:35:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:15:53.647 ************************************ 00:15:53.647 END TEST nvmf_multicontroller 00:15:53.647 ************************************ 00:15:53.647 18:35:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:53.647 18:35:16 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:15:53.647 18:35:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:53.647 18:35:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:53.647 18:35:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:53.647 ************************************ 00:15:53.647 START TEST nvmf_aer 00:15:53.647 ************************************ 00:15:53.647 18:35:16 nvmf_tcp.nvmf_aer -- 
common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:15:53.905 * Looking for test storage... 00:15:53.905 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:53.905 18:35:16 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:53.905 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:15:53.905 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:53.905 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:53.905 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:53.905 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:53.905 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:53.905 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:53.905 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:53.905 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:53.905 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:53.905 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:53.905 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:15:53.905 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=ee8aff67-4252-4979-91cf-1a72f40d57b6 00:15:53.905 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:53.905 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:53.905 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:53.905 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:53.905 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:53.905 18:35:16 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:53.905 18:35:16 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:53.905 18:35:16 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:53.905 18:35:16 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.905 18:35:16 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.905 18:35:16 nvmf_tcp.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.906 18:35:16 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:15:53.906 18:35:16 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.906 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:15:53.906 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:53.906 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:53.906 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:53.906 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:53.906 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:53.906 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:53.906 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:53.906 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:53.906 18:35:16 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:15:53.906 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:53.906 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:53.906 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:53.906 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:53.906 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:53.906 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:53.906 18:35:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:53.906 18:35:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:53.906 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:53.906 18:35:16 nvmf_tcp.nvmf_aer -- 
nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:53.906 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:53.906 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:53.906 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:53.906 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:53.906 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:53.906 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:53.906 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:53.906 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:53.906 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:53.906 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:53.906 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:53.906 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:53.906 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:53.906 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:53.906 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:53.906 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:53.906 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:53.906 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:53.906 Cannot find device "nvmf_tgt_br" 00:15:53.906 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@155 -- # true 00:15:53.906 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:53.906 Cannot find device "nvmf_tgt_br2" 00:15:53.906 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@156 -- # true 00:15:53.906 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:53.906 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:53.906 Cannot find device "nvmf_tgt_br" 00:15:53.906 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@158 -- # true 00:15:53.906 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:53.906 Cannot find device "nvmf_tgt_br2" 00:15:53.906 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@159 -- # true 00:15:53.906 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:53.906 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:53.906 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:53.906 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:53.906 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@162 -- # true 00:15:53.906 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:53.906 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:53.906 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@163 -- # true 00:15:53.906 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:53.906 
18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:53.906 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:54.166 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:54.166 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:54.166 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:54.166 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:54.166 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:54.166 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:54.166 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:54.166 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:54.166 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:54.166 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:54.166 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:54.166 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:54.166 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:54.166 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:54.166 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:54.166 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:54.166 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:54.166 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:54.166 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:54.166 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:54.166 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:54.166 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:54.166 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.105 ms 00:15:54.166 00:15:54.166 --- 10.0.0.2 ping statistics --- 00:15:54.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:54.166 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:15:54.166 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:54.166 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:54.166 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:15:54.166 00:15:54.166 --- 10.0.0.3 ping statistics --- 00:15:54.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:54.166 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:15:54.166 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:54.166 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:54.166 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:15:54.166 00:15:54.166 --- 10.0.0.1 ping statistics --- 00:15:54.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:54.166 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:15:54.166 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:54.166 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@433 -- # return 0 00:15:54.166 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:54.166 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:54.166 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:54.166 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:54.166 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:54.166 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:54.166 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:54.166 18:35:16 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:15:54.166 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:54.166 18:35:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:54.166 18:35:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:15:54.166 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=85801 00:15:54.166 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 85801 00:15:54.166 18:35:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 85801 ']' 00:15:54.166 18:35:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:54.166 18:35:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:54.166 18:35:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:54.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:54.166 18:35:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:54.166 18:35:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:15:54.166 18:35:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:54.166 [2024-07-15 18:35:16.744851] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:15:54.166 [2024-07-15 18:35:16.744921] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:54.424 [2024-07-15 18:35:16.887740] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:54.424 [2024-07-15 18:35:16.967432] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:54.424 [2024-07-15 18:35:16.967486] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
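For reference, the nvmf_veth_init sequence traced above reduces to the short shell sketch below. Interface, bridge, namespace names and addresses (nvmf_tgt_ns_spdk, nvmf_init_if, nvmf_br, 10.0.0.x/24) are taken directly from the trace; treat it as a condensed recap of the harness topology rather than a standalone setup script — it assumes root, iproute2 and iptables, and omits the error handling the harness wraps around each step.

# initiator-side veth pair, plus two target-side veth pairs moved into a private netns
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
# addressing: initiator 10.0.0.1, target listeners 10.0.0.2 / 10.0.0.3 inside the netns
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
# bring links up and bridge the host-side peers together
for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
# allow NVMe/TCP (port 4420) in, permit bridge-local forwarding, then sanity-check reachability
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2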
00:15:54.424 [2024-07-15 18:35:16.967496] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:54.424 [2024-07-15 18:35:16.967504] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:54.424 [2024-07-15 18:35:16.967511] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:54.424 [2024-07-15 18:35:16.967652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:54.424 [2024-07-15 18:35:16.967900] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:54.424 [2024-07-15 18:35:16.968636] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:54.424 [2024-07-15 18:35:16.968637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:54.990 18:35:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:54.990 18:35:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:15:54.990 18:35:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:54.990 18:35:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:54.990 18:35:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:15:55.249 18:35:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:55.249 18:35:17 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:55.249 18:35:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.249 18:35:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:15:55.249 [2024-07-15 18:35:17.645134] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:55.249 18:35:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.249 18:35:17 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:15:55.249 18:35:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.249 18:35:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:15:55.249 Malloc0 00:15:55.249 18:35:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.249 18:35:17 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:15:55.249 18:35:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.249 18:35:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:15:55.249 18:35:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.249 18:35:17 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:55.249 18:35:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.249 18:35:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:15:55.249 18:35:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.249 18:35:17 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:55.249 18:35:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.249 18:35:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:15:55.249 [2024-07-15 18:35:17.715675] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:15:55.249 18:35:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.249 18:35:17 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:15:55.249 18:35:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.249 18:35:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:15:55.249 [ 00:15:55.249 { 00:15:55.249 "allow_any_host": true, 00:15:55.249 "hosts": [], 00:15:55.249 "listen_addresses": [], 00:15:55.249 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:55.249 "subtype": "Discovery" 00:15:55.249 }, 00:15:55.249 { 00:15:55.249 "allow_any_host": true, 00:15:55.249 "hosts": [], 00:15:55.249 "listen_addresses": [ 00:15:55.249 { 00:15:55.249 "adrfam": "IPv4", 00:15:55.249 "traddr": "10.0.0.2", 00:15:55.249 "trsvcid": "4420", 00:15:55.249 "trtype": "TCP" 00:15:55.249 } 00:15:55.249 ], 00:15:55.249 "max_cntlid": 65519, 00:15:55.249 "max_namespaces": 2, 00:15:55.249 "min_cntlid": 1, 00:15:55.249 "model_number": "SPDK bdev Controller", 00:15:55.249 "namespaces": [ 00:15:55.249 { 00:15:55.249 "bdev_name": "Malloc0", 00:15:55.249 "name": "Malloc0", 00:15:55.249 "nguid": "27474954C673424EB4DCD574DEA5595D", 00:15:55.249 "nsid": 1, 00:15:55.249 "uuid": "27474954-c673-424e-b4dc-d574dea5595d" 00:15:55.249 } 00:15:55.249 ], 00:15:55.249 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:55.249 "serial_number": "SPDK00000000000001", 00:15:55.249 "subtype": "NVMe" 00:15:55.249 } 00:15:55.249 ] 00:15:55.249 18:35:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.249 18:35:17 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:55.249 18:35:17 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:15:55.249 18:35:17 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=85857 00:15:55.249 18:35:17 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:15:55.249 18:35:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:15:55.249 18:35:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:55.249 18:35:17 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:15:55.249 18:35:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:15:55.249 18:35:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:15:55.249 18:35:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:15:55.249 18:35:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:55.249 18:35:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:15:55.249 18:35:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:15:55.249 18:35:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:15:55.508 18:35:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:55.508 18:35:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:55.508 18:35:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:15:55.508 18:35:17 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:15:55.508 18:35:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.508 18:35:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:15:55.508 Malloc1 00:15:55.508 18:35:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.508 18:35:17 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:15:55.508 18:35:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.508 18:35:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:15:55.508 18:35:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.508 18:35:18 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:15:55.508 18:35:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.508 18:35:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:15:55.508 Asynchronous Event Request test 00:15:55.508 Attaching to 10.0.0.2 00:15:55.508 Attached to 10.0.0.2 00:15:55.508 Registering asynchronous event callbacks... 00:15:55.508 Starting namespace attribute notice tests for all controllers... 00:15:55.508 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:55.508 aer_cb - Changed Namespace 00:15:55.508 Cleaning up... 00:15:55.508 [ 00:15:55.508 { 00:15:55.508 "allow_any_host": true, 00:15:55.508 "hosts": [], 00:15:55.508 "listen_addresses": [], 00:15:55.508 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:55.508 "subtype": "Discovery" 00:15:55.508 }, 00:15:55.508 { 00:15:55.508 "allow_any_host": true, 00:15:55.508 "hosts": [], 00:15:55.508 "listen_addresses": [ 00:15:55.508 { 00:15:55.508 "adrfam": "IPv4", 00:15:55.508 "traddr": "10.0.0.2", 00:15:55.508 "trsvcid": "4420", 00:15:55.508 "trtype": "TCP" 00:15:55.508 } 00:15:55.508 ], 00:15:55.508 "max_cntlid": 65519, 00:15:55.508 "max_namespaces": 2, 00:15:55.508 "min_cntlid": 1, 00:15:55.508 "model_number": "SPDK bdev Controller", 00:15:55.508 "namespaces": [ 00:15:55.508 { 00:15:55.508 "bdev_name": "Malloc0", 00:15:55.508 "name": "Malloc0", 00:15:55.508 "nguid": "27474954C673424EB4DCD574DEA5595D", 00:15:55.508 "nsid": 1, 00:15:55.508 "uuid": "27474954-c673-424e-b4dc-d574dea5595d" 00:15:55.508 }, 00:15:55.508 { 00:15:55.508 "bdev_name": "Malloc1", 00:15:55.508 "name": "Malloc1", 00:15:55.508 "nguid": "7534D1EC59B543E3BBBC703C2CD15E2E", 00:15:55.508 "nsid": 2, 00:15:55.508 "uuid": "7534d1ec-59b5-43e3-bbbc-703c2cd15e2e" 00:15:55.508 } 00:15:55.508 ], 00:15:55.508 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:55.508 "serial_number": "SPDK00000000000001", 00:15:55.508 "subtype": "NVMe" 00:15:55.508 } 00:15:55.508 ] 00:15:55.508 18:35:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.508 18:35:18 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 85857 00:15:55.508 18:35:18 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:15:55.508 18:35:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.508 18:35:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:15:55.508 18:35:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.508 18:35:18 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- 
# rpc_cmd bdev_malloc_delete Malloc1 00:15:55.508 18:35:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.508 18:35:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:15:55.508 18:35:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.508 18:35:18 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:55.508 18:35:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.508 18:35:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:15:55.508 18:35:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.508 18:35:18 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:15:55.508 18:35:18 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:15:55.508 18:35:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:55.508 18:35:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:15:55.767 18:35:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:55.767 18:35:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:15:55.767 18:35:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:55.767 18:35:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:55.767 rmmod nvme_tcp 00:15:55.767 rmmod nvme_fabrics 00:15:55.767 rmmod nvme_keyring 00:15:55.767 18:35:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:55.767 18:35:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:15:55.767 18:35:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:15:55.767 18:35:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 85801 ']' 00:15:55.767 18:35:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 85801 00:15:55.767 18:35:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 85801 ']' 00:15:55.767 18:35:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 85801 00:15:55.767 18:35:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:15:55.767 18:35:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:55.767 18:35:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85801 00:15:55.767 18:35:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:55.767 18:35:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:55.767 killing process with pid 85801 00:15:55.767 18:35:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85801' 00:15:55.767 18:35:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 85801 00:15:55.767 18:35:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 85801 00:15:56.026 18:35:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:56.026 18:35:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:56.026 18:35:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:56.026 18:35:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:56.026 18:35:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:56.026 18:35:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:56.026 18:35:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
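The host/aer.sh flow traced above is driven entirely through JSON-RPC, and the same calls can be replayed by hand with scripts/rpc.py against an already running nvmf_tgt. The sketch below uses the default RPC socket and the exact method names, NQNs and sizes seen in the trace; it is a manual approximation of what the harness's rpc_cmd issued, not the test script itself.

# provisioning steps as traced in host/aer.sh
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 --name Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# with the AER listener (test/nvme/aer/aer ... -n 2 -t /tmp/aer_touch_file) attached,
# adding a second namespace is what triggers the "aer_cb for log page 4"
# namespace-attribute-changed event reported in the log
$rpc bdev_malloc_create 64 4096 --name Malloc1
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
$rpc nvmf_get_subsystems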
00:15:56.026 18:35:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:56.026 18:35:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:56.026 00:15:56.026 real 0m2.305s 00:15:56.026 user 0m5.943s 00:15:56.026 sys 0m0.739s 00:15:56.026 18:35:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:56.026 18:35:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:15:56.026 ************************************ 00:15:56.026 END TEST nvmf_aer 00:15:56.026 ************************************ 00:15:56.026 18:35:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:56.026 18:35:18 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:15:56.026 18:35:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:56.026 18:35:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:56.026 18:35:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:56.026 ************************************ 00:15:56.026 START TEST nvmf_async_init 00:15:56.026 ************************************ 00:15:56.026 18:35:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:15:56.286 * Looking for test storage... 00:15:56.286 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:56.286 18:35:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:56.286 18:35:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:15:56.286 18:35:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:56.286 18:35:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:56.286 18:35:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:56.286 18:35:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:56.286 18:35:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:56.286 18:35:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:56.286 18:35:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:56.287 18:35:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:56.287 18:35:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:56.287 18:35:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:56.287 18:35:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:15:56.287 18:35:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=ee8aff67-4252-4979-91cf-1a72f40d57b6 00:15:56.287 18:35:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:56.287 18:35:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:56.287 18:35:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:56.287 18:35:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:56.287 18:35:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:56.287 18:35:18 nvmf_tcp.nvmf_async_init 
-- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:56.287 18:35:18 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:56.287 18:35:18 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:56.287 18:35:18 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.287 18:35:18 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.287 18:35:18 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.287 18:35:18 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:15:56.287 18:35:18 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.287 18:35:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:15:56.287 18:35:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:56.287 18:35:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:56.287 18:35:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:56.287 18:35:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:56.287 18:35:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:56.287 18:35:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:56.287 18:35:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:56.287 
18:35:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:56.287 18:35:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:15:56.287 18:35:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:15:56.287 18:35:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:15:56.287 18:35:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:15:56.287 18:35:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:15:56.287 18:35:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:15:56.287 18:35:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=7f7d91d1925545d392c3df171c2d5bcd 00:15:56.287 18:35:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:15:56.287 18:35:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:56.287 18:35:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:56.287 18:35:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:56.287 18:35:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:56.287 18:35:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:56.287 18:35:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:56.287 18:35:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:56.287 18:35:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:56.287 18:35:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:56.287 18:35:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:56.287 18:35:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:56.287 18:35:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:56.287 18:35:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:56.287 18:35:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:56.287 18:35:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:56.287 18:35:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:56.287 18:35:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:56.287 18:35:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:56.287 18:35:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:56.287 18:35:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:56.287 18:35:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:56.287 18:35:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:56.287 18:35:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:56.287 18:35:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:56.287 18:35:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:56.287 18:35:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:56.287 18:35:18 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:56.287 18:35:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:56.287 Cannot find device "nvmf_tgt_br" 00:15:56.287 18:35:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@155 -- # true 00:15:56.287 18:35:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:56.287 Cannot find device "nvmf_tgt_br2" 00:15:56.287 18:35:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@156 -- # true 00:15:56.287 18:35:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:56.287 18:35:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:56.287 Cannot find device "nvmf_tgt_br" 00:15:56.287 18:35:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@158 -- # true 00:15:56.287 18:35:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:56.287 Cannot find device "nvmf_tgt_br2" 00:15:56.287 18:35:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@159 -- # true 00:15:56.287 18:35:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:56.287 18:35:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:56.287 18:35:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:56.287 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:56.287 18:35:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@162 -- # true 00:15:56.287 18:35:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:56.287 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:56.287 18:35:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@163 -- # true 00:15:56.287 18:35:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:56.287 18:35:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:56.564 18:35:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:56.564 18:35:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:56.564 18:35:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:56.564 18:35:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:56.564 18:35:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:56.564 18:35:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:56.564 18:35:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:56.564 18:35:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:56.564 18:35:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:56.564 18:35:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:56.564 18:35:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:56.564 18:35:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@187 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:56.564 18:35:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:56.564 18:35:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:56.564 18:35:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:56.564 18:35:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:56.564 18:35:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:56.564 18:35:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:56.564 18:35:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:56.564 18:35:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:56.564 18:35:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:56.564 18:35:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:56.564 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:56.564 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.107 ms 00:15:56.564 00:15:56.564 --- 10.0.0.2 ping statistics --- 00:15:56.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.564 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:15:56.564 18:35:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:56.824 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:56.824 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.084 ms 00:15:56.824 00:15:56.824 --- 10.0.0.3 ping statistics --- 00:15:56.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.824 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:15:56.824 18:35:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:56.824 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:56.824 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.065 ms 00:15:56.824 00:15:56.824 --- 10.0.0.1 ping statistics --- 00:15:56.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.824 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:15:56.824 18:35:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:56.824 18:35:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@433 -- # return 0 00:15:56.824 18:35:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:56.824 18:35:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:56.824 18:35:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:56.824 18:35:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:56.824 18:35:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:56.824 18:35:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:56.824 18:35:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:56.824 18:35:19 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:15:56.824 18:35:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:56.824 18:35:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:56.824 18:35:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:15:56.824 18:35:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=86031 00:15:56.824 18:35:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:56.824 18:35:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 86031 00:15:56.824 18:35:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 86031 ']' 00:15:56.824 18:35:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:56.824 18:35:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:56.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:56.824 18:35:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:56.824 18:35:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:56.824 18:35:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:15:56.824 [2024-07-15 18:35:19.285894] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:15:56.824 [2024-07-15 18:35:19.285964] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:56.824 [2024-07-15 18:35:19.428046] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:57.083 [2024-07-15 18:35:19.512912] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:57.083 [2024-07-15 18:35:19.512965] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:57.083 [2024-07-15 18:35:19.512975] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:57.083 [2024-07-15 18:35:19.512983] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:57.083 [2024-07-15 18:35:19.512990] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:57.083 [2024-07-15 18:35:19.513014] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:57.649 18:35:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:57.649 18:35:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:15:57.649 18:35:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:57.649 18:35:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:57.649 18:35:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:15:57.649 18:35:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:57.649 18:35:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:15:57.649 18:35:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.649 18:35:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:15:57.649 [2024-07-15 18:35:20.187181] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:57.649 18:35:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.649 18:35:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:15:57.649 18:35:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.649 18:35:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:15:57.649 null0 00:15:57.649 18:35:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.649 18:35:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:15:57.649 18:35:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.649 18:35:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:15:57.649 18:35:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.649 18:35:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:15:57.649 18:35:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.649 18:35:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:15:57.649 18:35:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.650 18:35:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 7f7d91d1925545d392c3df171c2d5bcd 00:15:57.650 18:35:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.650 18:35:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:15:57.650 18:35:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.650 18:35:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:57.650 
18:35:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.650 18:35:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:15:57.650 [2024-07-15 18:35:20.227198] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:57.650 18:35:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.650 18:35:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:15:57.650 18:35:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.650 18:35:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:15:57.908 nvme0n1 00:15:57.908 18:35:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.908 18:35:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:15:57.908 18:35:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.908 18:35:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:15:57.908 [ 00:15:57.908 { 00:15:57.908 "aliases": [ 00:15:57.908 "7f7d91d1-9255-45d3-92c3-df171c2d5bcd" 00:15:57.908 ], 00:15:57.908 "assigned_rate_limits": { 00:15:57.908 "r_mbytes_per_sec": 0, 00:15:57.908 "rw_ios_per_sec": 0, 00:15:57.908 "rw_mbytes_per_sec": 0, 00:15:57.908 "w_mbytes_per_sec": 0 00:15:57.908 }, 00:15:57.908 "block_size": 512, 00:15:57.908 "claimed": false, 00:15:57.908 "driver_specific": { 00:15:57.908 "mp_policy": "active_passive", 00:15:57.908 "nvme": [ 00:15:57.909 { 00:15:57.909 "ctrlr_data": { 00:15:57.909 "ana_reporting": false, 00:15:57.909 "cntlid": 1, 00:15:57.909 "firmware_revision": "24.09", 00:15:57.909 "model_number": "SPDK bdev Controller", 00:15:57.909 "multi_ctrlr": true, 00:15:57.909 "oacs": { 00:15:57.909 "firmware": 0, 00:15:57.909 "format": 0, 00:15:57.909 "ns_manage": 0, 00:15:57.909 "security": 0 00:15:57.909 }, 00:15:57.909 "serial_number": "00000000000000000000", 00:15:57.909 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:57.909 "vendor_id": "0x8086" 00:15:57.909 }, 00:15:57.909 "ns_data": { 00:15:57.909 "can_share": true, 00:15:57.909 "id": 1 00:15:57.909 }, 00:15:57.909 "trid": { 00:15:57.909 "adrfam": "IPv4", 00:15:57.909 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:57.909 "traddr": "10.0.0.2", 00:15:57.909 "trsvcid": "4420", 00:15:57.909 "trtype": "TCP" 00:15:57.909 }, 00:15:57.909 "vs": { 00:15:57.909 "nvme_version": "1.3" 00:15:57.909 } 00:15:57.909 } 00:15:57.909 ] 00:15:57.909 }, 00:15:57.909 "memory_domains": [ 00:15:57.909 { 00:15:57.909 "dma_device_id": "system", 00:15:57.909 "dma_device_type": 1 00:15:57.909 } 00:15:57.909 ], 00:15:57.909 "name": "nvme0n1", 00:15:57.909 "num_blocks": 2097152, 00:15:57.909 "product_name": "NVMe disk", 00:15:57.909 "supported_io_types": { 00:15:57.909 "abort": true, 00:15:57.909 "compare": true, 00:15:57.909 "compare_and_write": true, 00:15:57.909 "copy": true, 00:15:57.909 "flush": true, 00:15:57.909 "get_zone_info": false, 00:15:57.909 "nvme_admin": true, 00:15:57.909 "nvme_io": true, 00:15:57.909 "nvme_io_md": false, 00:15:57.909 "nvme_iov_md": false, 00:15:57.909 "read": true, 00:15:57.909 "reset": true, 00:15:57.909 "seek_data": false, 00:15:57.909 "seek_hole": false, 00:15:57.909 "unmap": false, 00:15:57.909 "write": true, 00:15:57.909 "write_zeroes": true, 00:15:57.909 "zcopy": false, 00:15:57.909 
"zone_append": false, 00:15:57.909 "zone_management": false 00:15:57.909 }, 00:15:57.909 "uuid": "7f7d91d1-9255-45d3-92c3-df171c2d5bcd", 00:15:57.909 "zoned": false 00:15:57.909 } 00:15:57.909 ] 00:15:57.909 18:35:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.909 18:35:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:15:57.909 18:35:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.909 18:35:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:15:57.909 [2024-07-15 18:35:20.498934] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:15:57.909 [2024-07-15 18:35:20.499012] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2483a30 (9): Bad file descriptor 00:15:58.202 [2024-07-15 18:35:20.671688] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:15:58.203 18:35:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.203 18:35:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:15:58.203 18:35:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.203 18:35:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:15:58.203 [ 00:15:58.203 { 00:15:58.203 "aliases": [ 00:15:58.203 "7f7d91d1-9255-45d3-92c3-df171c2d5bcd" 00:15:58.203 ], 00:15:58.203 "assigned_rate_limits": { 00:15:58.203 "r_mbytes_per_sec": 0, 00:15:58.203 "rw_ios_per_sec": 0, 00:15:58.203 "rw_mbytes_per_sec": 0, 00:15:58.203 "w_mbytes_per_sec": 0 00:15:58.203 }, 00:15:58.203 "block_size": 512, 00:15:58.203 "claimed": false, 00:15:58.203 "driver_specific": { 00:15:58.203 "mp_policy": "active_passive", 00:15:58.203 "nvme": [ 00:15:58.203 { 00:15:58.203 "ctrlr_data": { 00:15:58.203 "ana_reporting": false, 00:15:58.203 "cntlid": 2, 00:15:58.203 "firmware_revision": "24.09", 00:15:58.203 "model_number": "SPDK bdev Controller", 00:15:58.203 "multi_ctrlr": true, 00:15:58.203 "oacs": { 00:15:58.203 "firmware": 0, 00:15:58.203 "format": 0, 00:15:58.203 "ns_manage": 0, 00:15:58.203 "security": 0 00:15:58.203 }, 00:15:58.203 "serial_number": "00000000000000000000", 00:15:58.203 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:58.203 "vendor_id": "0x8086" 00:15:58.203 }, 00:15:58.203 "ns_data": { 00:15:58.203 "can_share": true, 00:15:58.203 "id": 1 00:15:58.203 }, 00:15:58.203 "trid": { 00:15:58.203 "adrfam": "IPv4", 00:15:58.203 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:58.203 "traddr": "10.0.0.2", 00:15:58.203 "trsvcid": "4420", 00:15:58.203 "trtype": "TCP" 00:15:58.203 }, 00:15:58.203 "vs": { 00:15:58.203 "nvme_version": "1.3" 00:15:58.203 } 00:15:58.203 } 00:15:58.203 ] 00:15:58.203 }, 00:15:58.203 "memory_domains": [ 00:15:58.203 { 00:15:58.203 "dma_device_id": "system", 00:15:58.203 "dma_device_type": 1 00:15:58.203 } 00:15:58.203 ], 00:15:58.203 "name": "nvme0n1", 00:15:58.203 "num_blocks": 2097152, 00:15:58.203 "product_name": "NVMe disk", 00:15:58.203 "supported_io_types": { 00:15:58.203 "abort": true, 00:15:58.203 "compare": true, 00:15:58.203 "compare_and_write": true, 00:15:58.203 "copy": true, 00:15:58.203 "flush": true, 00:15:58.203 "get_zone_info": false, 00:15:58.203 "nvme_admin": true, 00:15:58.203 "nvme_io": true, 00:15:58.203 "nvme_io_md": false, 00:15:58.203 "nvme_iov_md": false, 00:15:58.203 "read": true, 
00:15:58.203 "reset": true, 00:15:58.203 "seek_data": false, 00:15:58.203 "seek_hole": false, 00:15:58.203 "unmap": false, 00:15:58.203 "write": true, 00:15:58.203 "write_zeroes": true, 00:15:58.203 "zcopy": false, 00:15:58.203 "zone_append": false, 00:15:58.203 "zone_management": false 00:15:58.203 }, 00:15:58.203 "uuid": "7f7d91d1-9255-45d3-92c3-df171c2d5bcd", 00:15:58.203 "zoned": false 00:15:58.203 } 00:15:58.203 ] 00:15:58.203 18:35:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.203 18:35:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:58.203 18:35:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.203 18:35:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:15:58.203 18:35:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.203 18:35:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:15:58.203 18:35:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.0z4WxiKVQC 00:15:58.203 18:35:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:15:58.203 18:35:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.0z4WxiKVQC 00:15:58.203 18:35:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:15:58.203 18:35:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.203 18:35:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:15:58.203 18:35:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.203 18:35:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:15:58.203 18:35:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.203 18:35:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:15:58.203 [2024-07-15 18:35:20.746721] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:58.203 [2024-07-15 18:35:20.746860] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:58.203 18:35:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.203 18:35:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.0z4WxiKVQC 00:15:58.203 18:35:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.203 18:35:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:15:58.203 [2024-07-15 18:35:20.754704] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:58.203 18:35:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.203 18:35:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.0z4WxiKVQC 00:15:58.203 18:35:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.203 18:35:20 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:15:58.203 [2024-07-15 18:35:20.762701] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:58.203 [2024-07-15 18:35:20.762758] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:58.475 nvme0n1 00:15:58.475 18:35:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.475 18:35:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:15:58.475 18:35:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.475 18:35:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:15:58.475 [ 00:15:58.475 { 00:15:58.475 "aliases": [ 00:15:58.475 "7f7d91d1-9255-45d3-92c3-df171c2d5bcd" 00:15:58.475 ], 00:15:58.475 "assigned_rate_limits": { 00:15:58.475 "r_mbytes_per_sec": 0, 00:15:58.475 "rw_ios_per_sec": 0, 00:15:58.475 "rw_mbytes_per_sec": 0, 00:15:58.475 "w_mbytes_per_sec": 0 00:15:58.475 }, 00:15:58.475 "block_size": 512, 00:15:58.475 "claimed": false, 00:15:58.475 "driver_specific": { 00:15:58.475 "mp_policy": "active_passive", 00:15:58.475 "nvme": [ 00:15:58.475 { 00:15:58.475 "ctrlr_data": { 00:15:58.475 "ana_reporting": false, 00:15:58.475 "cntlid": 3, 00:15:58.475 "firmware_revision": "24.09", 00:15:58.475 "model_number": "SPDK bdev Controller", 00:15:58.475 "multi_ctrlr": true, 00:15:58.475 "oacs": { 00:15:58.475 "firmware": 0, 00:15:58.475 "format": 0, 00:15:58.475 "ns_manage": 0, 00:15:58.475 "security": 0 00:15:58.475 }, 00:15:58.475 "serial_number": "00000000000000000000", 00:15:58.475 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:58.475 "vendor_id": "0x8086" 00:15:58.475 }, 00:15:58.475 "ns_data": { 00:15:58.475 "can_share": true, 00:15:58.475 "id": 1 00:15:58.475 }, 00:15:58.475 "trid": { 00:15:58.475 "adrfam": "IPv4", 00:15:58.475 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:58.475 "traddr": "10.0.0.2", 00:15:58.475 "trsvcid": "4421", 00:15:58.475 "trtype": "TCP" 00:15:58.475 }, 00:15:58.475 "vs": { 00:15:58.475 "nvme_version": "1.3" 00:15:58.475 } 00:15:58.475 } 00:15:58.475 ] 00:15:58.475 }, 00:15:58.475 "memory_domains": [ 00:15:58.475 { 00:15:58.475 "dma_device_id": "system", 00:15:58.475 "dma_device_type": 1 00:15:58.475 } 00:15:58.475 ], 00:15:58.475 "name": "nvme0n1", 00:15:58.475 "num_blocks": 2097152, 00:15:58.475 "product_name": "NVMe disk", 00:15:58.475 "supported_io_types": { 00:15:58.475 "abort": true, 00:15:58.475 "compare": true, 00:15:58.475 "compare_and_write": true, 00:15:58.475 "copy": true, 00:15:58.475 "flush": true, 00:15:58.475 "get_zone_info": false, 00:15:58.475 "nvme_admin": true, 00:15:58.475 "nvme_io": true, 00:15:58.475 "nvme_io_md": false, 00:15:58.475 "nvme_iov_md": false, 00:15:58.475 "read": true, 00:15:58.475 "reset": true, 00:15:58.475 "seek_data": false, 00:15:58.475 "seek_hole": false, 00:15:58.475 "unmap": false, 00:15:58.475 "write": true, 00:15:58.475 "write_zeroes": true, 00:15:58.475 "zcopy": false, 00:15:58.475 "zone_append": false, 00:15:58.475 "zone_management": false 00:15:58.475 }, 00:15:58.475 "uuid": "7f7d91d1-9255-45d3-92c3-df171c2d5bcd", 00:15:58.475 "zoned": false 00:15:58.475 } 00:15:58.475 ] 00:15:58.475 18:35:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.475 18:35:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:15:58.475 18:35:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.475 18:35:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:15:58.475 18:35:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.475 18:35:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.0z4WxiKVQC 00:15:58.475 18:35:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:15:58.475 18:35:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:15:58.475 18:35:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:58.475 18:35:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:15:58.475 18:35:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:58.475 18:35:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:15:58.475 18:35:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:58.475 18:35:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:58.475 rmmod nvme_tcp 00:15:58.475 rmmod nvme_fabrics 00:15:58.475 rmmod nvme_keyring 00:15:58.475 18:35:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:58.475 18:35:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:15:58.476 18:35:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:15:58.476 18:35:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 86031 ']' 00:15:58.476 18:35:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 86031 00:15:58.476 18:35:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 86031 ']' 00:15:58.476 18:35:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 86031 00:15:58.476 18:35:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:15:58.476 18:35:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:58.476 18:35:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86031 00:15:58.476 18:35:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:58.476 18:35:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:58.476 killing process with pid 86031 00:15:58.476 18:35:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86031' 00:15:58.476 18:35:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 86031 00:15:58.476 [2024-07-15 18:35:21.049815] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:58.476 [2024-07-15 18:35:21.049846] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:58.476 18:35:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 86031 00:15:58.734 18:35:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:58.734 18:35:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:58.734 18:35:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:58.734 18:35:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:58.734 
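For reference, the nvmf_async_init run traced above reduces to one setup-and-exercise sequence. The sketch below is a consolidation of that trace, not the test script itself: it assumes the rpc_cmd wrapper forwards to scripts/rpc.py against the target started inside the nvmf_tgt_ns_spdk namespace (the wrapper, the $rpc variable and the comments are assumptions; every command and flag is copied from the trace).

# Network plumbing built by nvmf_veth_init above: three veth pairs
# (nvmf_init_if<->nvmf_init_br, nvmf_tgt_if<->nvmf_tgt_br, nvmf_tgt_if2<->nvmf_tgt_br2);
# the target ends sit in netns nvmf_tgt_ns_spdk with 10.0.0.2/24 and 10.0.0.3/24,
# the initiator end keeps 10.0.0.1/24, the bridge ends join nvmf_br, and iptables
# accepts TCP/4420 on nvmf_init_if plus forwarding across nvmf_br.
rpc="scripts/rpc.py"                        # assumption: rpc_cmd wraps this script
$rpc nvmf_create_transport -t tcp -o
$rpc bdev_null_create null0 1024 512        # 1024 MiB, 512 B blocks -> 2097152 blocks in the dump
$rpc bdev_wait_for_examine
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 7f7d91d1925545d392c3df171c2d5bcd
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
$rpc bdev_get_bdevs -b nvme0n1              # namespace surfaces as nvme0n1, cntlid 1
$rpc bdev_nvme_reset_controller nvme0       # reconnects; the second dump reports cntlid 2
$rpc bdev_nvme_detach_controller nvme0
# TLS leg: allow_any_host is disabled, a --secure-channel listener is added on 4421,
# the host NQN is registered with a PSK file, and the controller is re-attached with
# -q nqn.2016-06.io.spdk:host1 --psk <keyfile>; the resulting dump shows cntlid 3 on 4421.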
18:35:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:58.734 18:35:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:58.734 18:35:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:58.734 18:35:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:58.734 18:35:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:58.734 00:15:58.734 real 0m2.726s 00:15:58.734 user 0m2.298s 00:15:58.734 sys 0m0.795s 00:15:58.734 18:35:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:58.734 18:35:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:15:58.734 ************************************ 00:15:58.734 END TEST nvmf_async_init 00:15:58.734 ************************************ 00:15:58.734 18:35:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:58.734 18:35:21 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:15:58.734 18:35:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:58.734 18:35:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:58.734 18:35:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:58.993 ************************************ 00:15:58.993 START TEST dma 00:15:58.993 ************************************ 00:15:58.993 18:35:21 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:15:58.993 * Looking for test storage... 00:15:58.993 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:58.993 18:35:21 nvmf_tcp.dma -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:58.993 18:35:21 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:15:58.993 18:35:21 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:58.993 18:35:21 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:58.993 18:35:21 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:58.993 18:35:21 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:58.993 18:35:21 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:58.993 18:35:21 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:58.993 18:35:21 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:58.993 18:35:21 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:58.993 18:35:21 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:58.993 18:35:21 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:58.993 18:35:21 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:15:58.993 18:35:21 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=ee8aff67-4252-4979-91cf-1a72f40d57b6 00:15:58.993 18:35:21 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:58.993 18:35:21 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:58.993 18:35:21 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:58.993 18:35:21 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:58.993 18:35:21 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source 
/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:58.993 18:35:21 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:58.993 18:35:21 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:58.993 18:35:21 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:58.993 18:35:21 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.993 18:35:21 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.993 18:35:21 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.993 18:35:21 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:15:58.993 18:35:21 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.993 18:35:21 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:15:58.993 18:35:21 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:58.993 18:35:21 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:58.993 18:35:21 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:58.993 18:35:21 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:58.993 18:35:21 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:58.993 18:35:21 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:58.993 18:35:21 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:58.994 18:35:21 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:58.994 18:35:21 nvmf_tcp.dma -- 
host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:15:58.994 18:35:21 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:15:58.994 00:15:58.994 real 0m0.162s 00:15:58.994 user 0m0.078s 00:15:58.994 sys 0m0.096s 00:15:58.994 18:35:21 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:58.994 18:35:21 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:15:58.994 ************************************ 00:15:58.994 END TEST dma 00:15:58.994 ************************************ 00:15:58.994 18:35:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:58.994 18:35:21 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:15:58.994 18:35:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:58.994 18:35:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:58.994 18:35:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:58.994 ************************************ 00:15:58.994 START TEST nvmf_identify 00:15:58.994 ************************************ 00:15:58.994 18:35:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:15:59.253 * Looking for test storage... 00:15:59.253 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:59.253 18:35:21 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:59.253 18:35:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:15:59.253 18:35:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:59.253 18:35:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:59.253 18:35:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:59.253 18:35:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:59.253 18:35:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:59.253 18:35:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:59.253 18:35:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:59.253 18:35:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:59.253 18:35:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:59.253 18:35:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:59.253 18:35:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:15:59.253 18:35:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=ee8aff67-4252-4979-91cf-1a72f40d57b6 00:15:59.253 18:35:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:59.253 18:35:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:59.253 18:35:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:59.253 18:35:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:59.253 18:35:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:59.253 18:35:21 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:59.253 18:35:21 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:59.253 18:35:21 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:59.253 18:35:21 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.253 18:35:21 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.253 18:35:21 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.253 18:35:21 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:15:59.253 18:35:21 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.253 18:35:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:15:59.253 18:35:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:59.253 18:35:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:59.253 18:35:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:59.253 18:35:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:59.253 18:35:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:59.253 18:35:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:59.253 18:35:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:59.253 18:35:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:59.253 18:35:21 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:15:59.253 18:35:21 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:59.253 18:35:21 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:15:59.253 18:35:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:59.253 18:35:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:59.253 18:35:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:59.253 18:35:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:59.253 18:35:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:59.253 18:35:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:59.253 18:35:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:59.253 18:35:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:59.253 18:35:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:59.253 18:35:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:59.253 18:35:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:59.253 18:35:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:59.253 18:35:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:59.253 18:35:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:59.254 18:35:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:59.254 18:35:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:59.254 18:35:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:59.254 18:35:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:59.254 18:35:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:59.254 18:35:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:59.254 18:35:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:59.254 18:35:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:59.254 18:35:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:59.254 18:35:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:59.254 18:35:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:59.254 18:35:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:59.254 18:35:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:59.254 18:35:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:59.254 Cannot find device "nvmf_tgt_br" 00:15:59.254 18:35:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # true 00:15:59.254 18:35:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:59.254 Cannot find device "nvmf_tgt_br2" 00:15:59.254 18:35:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # true 00:15:59.254 18:35:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:59.254 18:35:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # ip link set 
nvmf_tgt_br down 00:15:59.254 Cannot find device "nvmf_tgt_br" 00:15:59.254 18:35:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # true 00:15:59.254 18:35:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:59.254 Cannot find device "nvmf_tgt_br2" 00:15:59.254 18:35:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # true 00:15:59.254 18:35:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:59.513 18:35:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:59.513 18:35:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:59.513 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:59.513 18:35:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # true 00:15:59.513 18:35:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:59.513 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:59.513 18:35:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # true 00:15:59.513 18:35:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:59.513 18:35:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:59.513 18:35:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:59.513 18:35:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:59.513 18:35:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:59.513 18:35:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:59.513 18:35:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:59.513 18:35:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:59.513 18:35:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:59.513 18:35:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:59.513 18:35:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:59.513 18:35:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:59.513 18:35:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:59.513 18:35:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:59.513 18:35:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:59.513 18:35:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:59.513 18:35:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:59.513 18:35:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:59.513 18:35:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:59.513 18:35:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:59.513 18:35:22 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:59.772 18:35:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:59.772 18:35:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:59.772 18:35:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:59.772 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:59.772 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:15:59.772 00:15:59.772 --- 10.0.0.2 ping statistics --- 00:15:59.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:59.772 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:15:59.772 18:35:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:59.772 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:59.772 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:15:59.772 00:15:59.772 --- 10.0.0.3 ping statistics --- 00:15:59.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:59.772 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:15:59.772 18:35:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:59.772 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:59.772 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:15:59.772 00:15:59.772 --- 10.0.0.1 ping statistics --- 00:15:59.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:59.772 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:15:59.772 18:35:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:59.772 18:35:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@433 -- # return 0 00:15:59.772 18:35:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:59.772 18:35:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:59.772 18:35:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:59.772 18:35:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:59.772 18:35:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:59.772 18:35:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:59.772 18:35:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:59.772 18:35:22 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:15:59.772 18:35:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:59.772 18:35:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:59.772 18:35:22 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=86304 00:15:59.773 18:35:22 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:59.773 18:35:22 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:59.773 18:35:22 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 86304 00:15:59.773 18:35:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 86304 ']' 00:15:59.773 18:35:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:59.773 18:35:22 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:15:59.773 18:35:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:59.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:59.773 18:35:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:59.773 18:35:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:59.773 [2024-07-15 18:35:22.238921] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:15:59.773 [2024-07-15 18:35:22.238993] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:59.773 [2024-07-15 18:35:22.380454] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:00.031 [2024-07-15 18:35:22.477839] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:00.031 [2024-07-15 18:35:22.477895] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:00.031 [2024-07-15 18:35:22.477904] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:00.031 [2024-07-15 18:35:22.477912] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:00.031 [2024-07-15 18:35:22.477919] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:00.031 [2024-07-15 18:35:22.478132] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:00.031 [2024-07-15 18:35:22.478315] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:00.031 [2024-07-15 18:35:22.478929] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:00.031 [2024-07-15 18:35:22.478930] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:00.597 18:35:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:00.597 18:35:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:16:00.597 18:35:23 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:00.597 18:35:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.597 18:35:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:00.597 [2024-07-15 18:35:23.107219] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:00.597 18:35:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.597 18:35:23 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:16:00.597 18:35:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:00.597 18:35:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:00.597 18:35:23 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:00.597 18:35:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.597 18:35:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:00.597 Malloc0 00:16:00.597 18:35:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.598 
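The waitforlisten step above (rpc_addr /var/tmp/spdk.sock, max_retries 100) is only a readiness gate before any rpc_cmd runs: it blocks until the freshly started nvmf_tgt answers on its RPC socket. The loop below is a minimal stand-in for that idea, assuming polling via rpc_get_methods; the real helper in autotest_common.sh may be implemented differently.

waitforlisten_sketch() {
    # $1 = target pid, $2 = RPC socket path (defaults mirror the trace above)
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1                        # target died
        scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
        sleep 0.5
    done
    return 1                                                          # never came up
}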
18:35:23 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:00.598 18:35:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.598 18:35:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:00.598 18:35:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.598 18:35:23 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:16:00.598 18:35:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.598 18:35:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:00.598 18:35:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.598 18:35:23 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:00.598 18:35:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.598 18:35:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:00.598 [2024-07-15 18:35:23.208418] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:00.858 18:35:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.858 18:35:23 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:00.858 18:35:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.858 18:35:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:00.858 18:35:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.858 18:35:23 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:16:00.858 18:35:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.858 18:35:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:00.858 [ 00:16:00.858 { 00:16:00.858 "allow_any_host": true, 00:16:00.858 "hosts": [], 00:16:00.858 "listen_addresses": [ 00:16:00.858 { 00:16:00.858 "adrfam": "IPv4", 00:16:00.858 "traddr": "10.0.0.2", 00:16:00.858 "trsvcid": "4420", 00:16:00.858 "trtype": "TCP" 00:16:00.858 } 00:16:00.858 ], 00:16:00.858 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:00.858 "subtype": "Discovery" 00:16:00.858 }, 00:16:00.858 { 00:16:00.858 "allow_any_host": true, 00:16:00.858 "hosts": [], 00:16:00.858 "listen_addresses": [ 00:16:00.858 { 00:16:00.858 "adrfam": "IPv4", 00:16:00.858 "traddr": "10.0.0.2", 00:16:00.858 "trsvcid": "4420", 00:16:00.858 "trtype": "TCP" 00:16:00.858 } 00:16:00.858 ], 00:16:00.858 "max_cntlid": 65519, 00:16:00.858 "max_namespaces": 32, 00:16:00.858 "min_cntlid": 1, 00:16:00.858 "model_number": "SPDK bdev Controller", 00:16:00.858 "namespaces": [ 00:16:00.858 { 00:16:00.858 "bdev_name": "Malloc0", 00:16:00.858 "eui64": "ABCDEF0123456789", 00:16:00.858 "name": "Malloc0", 00:16:00.858 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:16:00.858 "nsid": 1, 00:16:00.858 "uuid": "bb4d23d2-f6ca-4e0e-bf95-90991e4a461a" 00:16:00.858 } 00:16:00.858 ], 00:16:00.858 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:00.858 "serial_number": "SPDK00000000000001", 00:16:00.858 "subtype": "NVMe" 00:16:00.858 } 00:16:00.858 ] 
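The nvmf_get_subsystems dump above reflects the setup identify.sh just ran. Consolidated, and again assuming the rpc_cmd wrapper resolves to scripts/rpc.py (the $rpc variable and comments are assumptions; commands and flags are taken verbatim from the trace), it amounts to:

rpc="scripts/rpc.py"                                   # assumption, as above
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0              # 64 MiB ram disk, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# The dump lists the discovery subsystem and cnode1 with Malloc0 as nsid 1; the
# spdk_nvme_identify run that follows connects to the discovery service at
# 10.0.0.2:4420, and its -L all switch enables the debug log flags that produce
# the nvme_tcp/nvme_ctrlr traces below.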
00:16:00.858 18:35:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.858 18:35:23 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:16:00.858 [2024-07-15 18:35:23.282179] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:16:00.858 [2024-07-15 18:35:23.282226] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86357 ] 00:16:00.858 [2024-07-15 18:35:23.418420] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:16:00.858 [2024-07-15 18:35:23.418509] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:16:00.858 [2024-07-15 18:35:23.418517] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:16:00.858 [2024-07-15 18:35:23.418534] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:16:00.858 [2024-07-15 18:35:23.418543] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:16:00.858 [2024-07-15 18:35:23.418724] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:16:00.858 [2024-07-15 18:35:23.418783] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1f9ca60 0 00:16:00.858 [2024-07-15 18:35:23.432611] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:16:00.858 [2024-07-15 18:35:23.432647] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:16:00.858 [2024-07-15 18:35:23.432655] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:16:00.858 [2024-07-15 18:35:23.432662] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:16:00.858 [2024-07-15 18:35:23.432726] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:00.858 [2024-07-15 18:35:23.432735] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:00.858 [2024-07-15 18:35:23.432743] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f9ca60) 00:16:00.858 [2024-07-15 18:35:23.432761] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:16:00.858 [2024-07-15 18:35:23.432798] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdf840, cid 0, qid 0 00:16:00.858 [2024-07-15 18:35:23.440597] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:00.858 [2024-07-15 18:35:23.440631] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:00.858 [2024-07-15 18:35:23.440639] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:00.858 [2024-07-15 18:35:23.440646] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdf840) on tqpair=0x1f9ca60 00:16:00.858 [2024-07-15 18:35:23.440666] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:16:00.858 [2024-07-15 18:35:23.440676] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:16:00.858 [2024-07-15 18:35:23.440685] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:16:00.858 [2024-07-15 18:35:23.440709] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:00.858 [2024-07-15 18:35:23.440716] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:00.858 [2024-07-15 18:35:23.440721] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f9ca60) 00:16:00.858 [2024-07-15 18:35:23.440735] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.858 [2024-07-15 18:35:23.440773] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdf840, cid 0, qid 0 00:16:00.858 [2024-07-15 18:35:23.440830] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:00.858 [2024-07-15 18:35:23.440838] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:00.858 [2024-07-15 18:35:23.440844] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:00.858 [2024-07-15 18:35:23.440850] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdf840) on tqpair=0x1f9ca60 00:16:00.858 [2024-07-15 18:35:23.440857] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:16:00.858 [2024-07-15 18:35:23.440867] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:16:00.858 [2024-07-15 18:35:23.440877] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:00.858 [2024-07-15 18:35:23.440882] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:00.858 [2024-07-15 18:35:23.440887] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f9ca60) 00:16:00.858 [2024-07-15 18:35:23.440898] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.858 [2024-07-15 18:35:23.440923] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdf840, cid 0, qid 0 00:16:00.858 [2024-07-15 18:35:23.440967] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:00.858 [2024-07-15 18:35:23.440975] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:00.858 [2024-07-15 18:35:23.440981] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:00.858 [2024-07-15 18:35:23.440986] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdf840) on tqpair=0x1f9ca60 00:16:00.858 [2024-07-15 18:35:23.440994] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:16:00.858 [2024-07-15 18:35:23.441005] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:16:00.858 [2024-07-15 18:35:23.441015] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:00.858 [2024-07-15 18:35:23.441021] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:00.858 [2024-07-15 18:35:23.441027] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f9ca60) 
00:16:00.858 [2024-07-15 18:35:23.441037] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.858 [2024-07-15 18:35:23.441065] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdf840, cid 0, qid 0 00:16:00.858 [2024-07-15 18:35:23.441107] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:00.858 [2024-07-15 18:35:23.441115] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:00.858 [2024-07-15 18:35:23.441120] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:00.858 [2024-07-15 18:35:23.441125] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdf840) on tqpair=0x1f9ca60 00:16:00.858 [2024-07-15 18:35:23.441133] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:00.859 [2024-07-15 18:35:23.441145] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:00.859 [2024-07-15 18:35:23.441151] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:00.859 [2024-07-15 18:35:23.441156] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f9ca60) 00:16:00.859 [2024-07-15 18:35:23.441165] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.859 [2024-07-15 18:35:23.441189] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdf840, cid 0, qid 0 00:16:00.859 [2024-07-15 18:35:23.441228] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:00.859 [2024-07-15 18:35:23.441237] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:00.859 [2024-07-15 18:35:23.441242] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:00.859 [2024-07-15 18:35:23.441248] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdf840) on tqpair=0x1f9ca60 00:16:00.859 [2024-07-15 18:35:23.441255] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:16:00.859 [2024-07-15 18:35:23.441262] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:16:00.859 [2024-07-15 18:35:23.441272] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:00.859 [2024-07-15 18:35:23.441379] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:16:00.859 [2024-07-15 18:35:23.441389] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:00.859 [2024-07-15 18:35:23.441402] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:00.859 [2024-07-15 18:35:23.441408] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:00.859 [2024-07-15 18:35:23.441413] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f9ca60) 00:16:00.859 [2024-07-15 18:35:23.441423] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.859 
[2024-07-15 18:35:23.441453] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdf840, cid 0, qid 0 00:16:00.859 [2024-07-15 18:35:23.441490] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:00.859 [2024-07-15 18:35:23.441498] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:00.859 [2024-07-15 18:35:23.441504] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:00.859 [2024-07-15 18:35:23.441509] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdf840) on tqpair=0x1f9ca60 00:16:00.859 [2024-07-15 18:35:23.441516] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:00.859 [2024-07-15 18:35:23.441529] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:00.859 [2024-07-15 18:35:23.441535] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:00.859 [2024-07-15 18:35:23.441540] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f9ca60) 00:16:00.859 [2024-07-15 18:35:23.441550] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.859 [2024-07-15 18:35:23.441592] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdf840, cid 0, qid 0 00:16:00.859 [2024-07-15 18:35:23.441633] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:00.859 [2024-07-15 18:35:23.441641] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:00.859 [2024-07-15 18:35:23.441647] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:00.859 [2024-07-15 18:35:23.441652] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdf840) on tqpair=0x1f9ca60 00:16:00.859 [2024-07-15 18:35:23.441659] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:00.859 [2024-07-15 18:35:23.441666] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:16:00.859 [2024-07-15 18:35:23.441676] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:16:00.859 [2024-07-15 18:35:23.441691] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:16:00.859 [2024-07-15 18:35:23.441704] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:00.859 [2024-07-15 18:35:23.441709] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f9ca60) 00:16:00.859 [2024-07-15 18:35:23.441719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.859 [2024-07-15 18:35:23.441741] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdf840, cid 0, qid 0 00:16:00.859 [2024-07-15 18:35:23.441813] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:00.859 [2024-07-15 18:35:23.441821] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:00.859 [2024-07-15 18:35:23.441827] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 
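The trace above walks the standard controller-enable handshake over the fabric admin queue: read VS and CAP, write CC.EN = 1 via FABRIC PROPERTY SET, then poll until CSTS.RDY = 1 before moving on to IDENTIFY. When reading a capture like this offline, the per-PDU DEBUG chatter can be filtered out so only the state transitions and printed admin commands remain; a throwaway sketch, assuming the console output has been saved to a file (identify.log is just an illustrative name):

  # show only controller state changes and printed admin commands
  grep -E 'setting state to|nvme_admin_qpair_print_command' identify.log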
00:16:00.859 [2024-07-15 18:35:23.441833] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f9ca60): datao=0, datal=4096, cccid=0 00:16:00.859 [2024-07-15 18:35:23.441840] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1fdf840) on tqpair(0x1f9ca60): expected_datao=0, payload_size=4096 00:16:00.859 [2024-07-15 18:35:23.441847] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:00.859 [2024-07-15 18:35:23.441857] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:00.859 [2024-07-15 18:35:23.441863] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:00.859 [2024-07-15 18:35:23.441873] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:00.859 [2024-07-15 18:35:23.441881] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:00.859 [2024-07-15 18:35:23.441886] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:00.859 [2024-07-15 18:35:23.441892] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdf840) on tqpair=0x1f9ca60 00:16:00.859 [2024-07-15 18:35:23.441903] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:16:00.859 [2024-07-15 18:35:23.441910] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:16:00.859 [2024-07-15 18:35:23.441917] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:16:00.859 [2024-07-15 18:35:23.441924] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:16:00.859 [2024-07-15 18:35:23.441931] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:16:00.859 [2024-07-15 18:35:23.441938] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:16:00.859 [2024-07-15 18:35:23.441949] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:16:00.859 [2024-07-15 18:35:23.441958] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:00.859 [2024-07-15 18:35:23.441963] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:00.859 [2024-07-15 18:35:23.441969] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f9ca60) 00:16:00.859 [2024-07-15 18:35:23.441978] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:00.859 [2024-07-15 18:35:23.441998] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdf840, cid 0, qid 0 00:16:00.859 [2024-07-15 18:35:23.442043] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:00.859 [2024-07-15 18:35:23.442051] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:00.859 [2024-07-15 18:35:23.442056] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:00.859 [2024-07-15 18:35:23.442062] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdf840) on tqpair=0x1f9ca60 00:16:00.859 [2024-07-15 18:35:23.442071] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:00.859 [2024-07-15 
18:35:23.442076] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:00.859 [2024-07-15 18:35:23.442082] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f9ca60) 00:16:00.859 [2024-07-15 18:35:23.442090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:00.859 [2024-07-15 18:35:23.442098] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:00.859 [2024-07-15 18:35:23.442103] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:00.859 [2024-07-15 18:35:23.442109] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1f9ca60) 00:16:00.859 [2024-07-15 18:35:23.442116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:00.859 [2024-07-15 18:35:23.442124] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:00.859 [2024-07-15 18:35:23.442130] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:00.859 [2024-07-15 18:35:23.442135] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1f9ca60) 00:16:00.859 [2024-07-15 18:35:23.442142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:00.859 [2024-07-15 18:35:23.442150] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:00.859 [2024-07-15 18:35:23.442156] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:00.860 [2024-07-15 18:35:23.442161] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f9ca60) 00:16:00.860 [2024-07-15 18:35:23.442168] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:00.860 [2024-07-15 18:35:23.442175] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:16:00.860 [2024-07-15 18:35:23.442192] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:00.860 [2024-07-15 18:35:23.442200] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:00.860 [2024-07-15 18:35:23.442206] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f9ca60) 00:16:00.860 [2024-07-15 18:35:23.442214] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.860 [2024-07-15 18:35:23.442237] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdf840, cid 0, qid 0 00:16:00.860 [2024-07-15 18:35:23.442244] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdf9c0, cid 1, qid 0 00:16:00.860 [2024-07-15 18:35:23.442251] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdfb40, cid 2, qid 0 00:16:00.860 [2024-07-15 18:35:23.442257] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdfcc0, cid 3, qid 0 00:16:00.860 [2024-07-15 18:35:23.442264] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdfe40, cid 4, qid 0 00:16:00.860 [2024-07-15 18:35:23.442325] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: 
pdu type = 5 00:16:00.860 [2024-07-15 18:35:23.442333] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:00.860 [2024-07-15 18:35:23.442338] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:00.860 [2024-07-15 18:35:23.442344] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdfe40) on tqpair=0x1f9ca60 00:16:00.860 [2024-07-15 18:35:23.442350] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:16:00.860 [2024-07-15 18:35:23.442362] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:16:00.860 [2024-07-15 18:35:23.442376] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:00.860 [2024-07-15 18:35:23.442382] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f9ca60) 00:16:00.860 [2024-07-15 18:35:23.442390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.860 [2024-07-15 18:35:23.442410] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdfe40, cid 4, qid 0 00:16:00.860 [2024-07-15 18:35:23.442456] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:00.860 [2024-07-15 18:35:23.442463] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:00.860 [2024-07-15 18:35:23.442469] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:00.860 [2024-07-15 18:35:23.442475] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f9ca60): datao=0, datal=4096, cccid=4 00:16:00.860 [2024-07-15 18:35:23.442481] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1fdfe40) on tqpair(0x1f9ca60): expected_datao=0, payload_size=4096 00:16:00.860 [2024-07-15 18:35:23.442488] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:00.860 [2024-07-15 18:35:23.442497] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:00.860 [2024-07-15 18:35:23.442502] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:00.860 [2024-07-15 18:35:23.442512] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:00.860 [2024-07-15 18:35:23.442520] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:00.860 [2024-07-15 18:35:23.442525] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:00.860 [2024-07-15 18:35:23.442530] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdfe40) on tqpair=0x1f9ca60 00:16:00.860 [2024-07-15 18:35:23.442546] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:16:00.860 [2024-07-15 18:35:23.442603] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:00.860 [2024-07-15 18:35:23.442611] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f9ca60) 00:16:00.860 [2024-07-15 18:35:23.442620] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.860 [2024-07-15 18:35:23.442630] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:00.860 [2024-07-15 18:35:23.442635] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
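The GET LOG PAGE (02h) commands for log page 70h traced around this point fetch the discovery log that is printed in full further below. For a cross-check outside SPDK, the same discovery log can normally be pulled with the kernel nvme-cli tool; this is not part of the test run and assumes nvme-cli plus the nvme-tcp kernel module are available on the host:

  # independent look at the same discovery service (hypothetical, not run by this test)
  modprobe nvme-tcp
  nvme discover -t tcp -a 10.0.0.2 -s 4420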
00:16:00.860 [2024-07-15 18:35:23.442641] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1f9ca60) 00:16:00.860 [2024-07-15 18:35:23.442648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:16:00.860 [2024-07-15 18:35:23.442677] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdfe40, cid 4, qid 0 00:16:00.860 [2024-07-15 18:35:23.442684] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdffc0, cid 5, qid 0 00:16:00.860 [2024-07-15 18:35:23.442787] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:00.860 [2024-07-15 18:35:23.442798] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:00.860 [2024-07-15 18:35:23.442804] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:00.860 [2024-07-15 18:35:23.442809] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f9ca60): datao=0, datal=1024, cccid=4 00:16:00.860 [2024-07-15 18:35:23.442816] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1fdfe40) on tqpair(0x1f9ca60): expected_datao=0, payload_size=1024 00:16:00.860 [2024-07-15 18:35:23.442823] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:00.860 [2024-07-15 18:35:23.442831] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:00.860 [2024-07-15 18:35:23.442837] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:00.860 [2024-07-15 18:35:23.442844] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:00.860 [2024-07-15 18:35:23.442852] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:00.860 [2024-07-15 18:35:23.442857] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:00.860 [2024-07-15 18:35:23.442863] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdffc0) on tqpair=0x1f9ca60 00:16:01.121 [2024-07-15 18:35:23.483621] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:01.121 [2024-07-15 18:35:23.483652] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:01.121 [2024-07-15 18:35:23.483660] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:01.121 [2024-07-15 18:35:23.483667] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdfe40) on tqpair=0x1f9ca60 00:16:01.121 [2024-07-15 18:35:23.483688] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:01.121 [2024-07-15 18:35:23.483695] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f9ca60) 00:16:01.121 [2024-07-15 18:35:23.483707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.121 [2024-07-15 18:35:23.483746] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdfe40, cid 4, qid 0 00:16:01.121 [2024-07-15 18:35:23.483811] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:01.121 [2024-07-15 18:35:23.483820] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:01.121 [2024-07-15 18:35:23.483826] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:01.121 [2024-07-15 18:35:23.483832] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f9ca60): datao=0, datal=3072, cccid=4 00:16:01.121 [2024-07-15 
18:35:23.483839] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1fdfe40) on tqpair(0x1f9ca60): expected_datao=0, payload_size=3072 00:16:01.121 [2024-07-15 18:35:23.483847] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:01.121 [2024-07-15 18:35:23.483855] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:01.121 [2024-07-15 18:35:23.483861] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:01.121 [2024-07-15 18:35:23.483871] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:01.121 [2024-07-15 18:35:23.483879] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:01.121 [2024-07-15 18:35:23.483884] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:01.121 [2024-07-15 18:35:23.483889] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdfe40) on tqpair=0x1f9ca60 00:16:01.121 [2024-07-15 18:35:23.483904] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:01.121 [2024-07-15 18:35:23.483910] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f9ca60) 00:16:01.121 [2024-07-15 18:35:23.483919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.121 [2024-07-15 18:35:23.483951] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdfe40, cid 4, qid 0 00:16:01.121 [2024-07-15 18:35:23.484000] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:01.121 [2024-07-15 18:35:23.484008] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:01.121 [2024-07-15 18:35:23.484014] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:01.121 [2024-07-15 18:35:23.484019] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f9ca60): datao=0, datal=8, cccid=4 00:16:01.121 [2024-07-15 18:35:23.484026] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1fdfe40) on tqpair(0x1f9ca60): expected_datao=0, payload_size=8 00:16:01.121 [2024-07-15 18:35:23.484032] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:01.121 [2024-07-15 18:35:23.484041] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:01.121 [2024-07-15 18:35:23.484047] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:01.121 [2024-07-15 18:35:23.528656] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:01.121 [2024-07-15 18:35:23.528730] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:01.121 [2024-07-15 18:35:23.528747] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:01.121 [2024-07-15 18:35:23.528764] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdfe40) on tqpair=0x1f9ca60 00:16:01.121 ===================================================== 00:16:01.121 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:16:01.121 ===================================================== 00:16:01.121 Controller Capabilities/Features 00:16:01.121 ================================ 00:16:01.121 Vendor ID: 0000 00:16:01.121 Subsystem Vendor ID: 0000 00:16:01.121 Serial Number: .................... 00:16:01.121 Model Number: ........................................ 
00:16:01.121 Firmware Version: 24.09 00:16:01.121 Recommended Arb Burst: 0 00:16:01.121 IEEE OUI Identifier: 00 00 00 00:16:01.121 Multi-path I/O 00:16:01.121 May have multiple subsystem ports: No 00:16:01.121 May have multiple controllers: No 00:16:01.121 Associated with SR-IOV VF: No 00:16:01.121 Max Data Transfer Size: 131072 00:16:01.121 Max Number of Namespaces: 0 00:16:01.121 Max Number of I/O Queues: 1024 00:16:01.121 NVMe Specification Version (VS): 1.3 00:16:01.121 NVMe Specification Version (Identify): 1.3 00:16:01.121 Maximum Queue Entries: 128 00:16:01.121 Contiguous Queues Required: Yes 00:16:01.121 Arbitration Mechanisms Supported 00:16:01.121 Weighted Round Robin: Not Supported 00:16:01.121 Vendor Specific: Not Supported 00:16:01.121 Reset Timeout: 15000 ms 00:16:01.121 Doorbell Stride: 4 bytes 00:16:01.121 NVM Subsystem Reset: Not Supported 00:16:01.121 Command Sets Supported 00:16:01.121 NVM Command Set: Supported 00:16:01.121 Boot Partition: Not Supported 00:16:01.121 Memory Page Size Minimum: 4096 bytes 00:16:01.121 Memory Page Size Maximum: 4096 bytes 00:16:01.121 Persistent Memory Region: Not Supported 00:16:01.121 Optional Asynchronous Events Supported 00:16:01.121 Namespace Attribute Notices: Not Supported 00:16:01.121 Firmware Activation Notices: Not Supported 00:16:01.121 ANA Change Notices: Not Supported 00:16:01.121 PLE Aggregate Log Change Notices: Not Supported 00:16:01.121 LBA Status Info Alert Notices: Not Supported 00:16:01.121 EGE Aggregate Log Change Notices: Not Supported 00:16:01.121 Normal NVM Subsystem Shutdown event: Not Supported 00:16:01.121 Zone Descriptor Change Notices: Not Supported 00:16:01.121 Discovery Log Change Notices: Supported 00:16:01.121 Controller Attributes 00:16:01.121 128-bit Host Identifier: Not Supported 00:16:01.121 Non-Operational Permissive Mode: Not Supported 00:16:01.121 NVM Sets: Not Supported 00:16:01.121 Read Recovery Levels: Not Supported 00:16:01.121 Endurance Groups: Not Supported 00:16:01.121 Predictable Latency Mode: Not Supported 00:16:01.121 Traffic Based Keep ALive: Not Supported 00:16:01.121 Namespace Granularity: Not Supported 00:16:01.122 SQ Associations: Not Supported 00:16:01.122 UUID List: Not Supported 00:16:01.122 Multi-Domain Subsystem: Not Supported 00:16:01.122 Fixed Capacity Management: Not Supported 00:16:01.122 Variable Capacity Management: Not Supported 00:16:01.122 Delete Endurance Group: Not Supported 00:16:01.122 Delete NVM Set: Not Supported 00:16:01.122 Extended LBA Formats Supported: Not Supported 00:16:01.122 Flexible Data Placement Supported: Not Supported 00:16:01.122 00:16:01.122 Controller Memory Buffer Support 00:16:01.122 ================================ 00:16:01.122 Supported: No 00:16:01.122 00:16:01.122 Persistent Memory Region Support 00:16:01.122 ================================ 00:16:01.122 Supported: No 00:16:01.122 00:16:01.122 Admin Command Set Attributes 00:16:01.122 ============================ 00:16:01.122 Security Send/Receive: Not Supported 00:16:01.122 Format NVM: Not Supported 00:16:01.122 Firmware Activate/Download: Not Supported 00:16:01.122 Namespace Management: Not Supported 00:16:01.122 Device Self-Test: Not Supported 00:16:01.122 Directives: Not Supported 00:16:01.122 NVMe-MI: Not Supported 00:16:01.122 Virtualization Management: Not Supported 00:16:01.122 Doorbell Buffer Config: Not Supported 00:16:01.122 Get LBA Status Capability: Not Supported 00:16:01.122 Command & Feature Lockdown Capability: Not Supported 00:16:01.122 Abort Command Limit: 1 00:16:01.122 Async 
Event Request Limit: 4 00:16:01.122 Number of Firmware Slots: N/A 00:16:01.122 Firmware Slot 1 Read-Only: N/A 00:16:01.122 Firmware Activation Without Reset: N/A 00:16:01.122 Multiple Update Detection Support: N/A 00:16:01.122 Firmware Update Granularity: No Information Provided 00:16:01.122 Per-Namespace SMART Log: No 00:16:01.122 Asymmetric Namespace Access Log Page: Not Supported 00:16:01.122 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:16:01.122 Command Effects Log Page: Not Supported 00:16:01.122 Get Log Page Extended Data: Supported 00:16:01.122 Telemetry Log Pages: Not Supported 00:16:01.122 Persistent Event Log Pages: Not Supported 00:16:01.122 Supported Log Pages Log Page: May Support 00:16:01.122 Commands Supported & Effects Log Page: Not Supported 00:16:01.122 Feature Identifiers & Effects Log Page:May Support 00:16:01.122 NVMe-MI Commands & Effects Log Page: May Support 00:16:01.122 Data Area 4 for Telemetry Log: Not Supported 00:16:01.122 Error Log Page Entries Supported: 128 00:16:01.122 Keep Alive: Not Supported 00:16:01.122 00:16:01.122 NVM Command Set Attributes 00:16:01.122 ========================== 00:16:01.122 Submission Queue Entry Size 00:16:01.122 Max: 1 00:16:01.122 Min: 1 00:16:01.122 Completion Queue Entry Size 00:16:01.122 Max: 1 00:16:01.122 Min: 1 00:16:01.122 Number of Namespaces: 0 00:16:01.122 Compare Command: Not Supported 00:16:01.122 Write Uncorrectable Command: Not Supported 00:16:01.122 Dataset Management Command: Not Supported 00:16:01.122 Write Zeroes Command: Not Supported 00:16:01.122 Set Features Save Field: Not Supported 00:16:01.122 Reservations: Not Supported 00:16:01.122 Timestamp: Not Supported 00:16:01.122 Copy: Not Supported 00:16:01.122 Volatile Write Cache: Not Present 00:16:01.122 Atomic Write Unit (Normal): 1 00:16:01.122 Atomic Write Unit (PFail): 1 00:16:01.122 Atomic Compare & Write Unit: 1 00:16:01.122 Fused Compare & Write: Supported 00:16:01.122 Scatter-Gather List 00:16:01.122 SGL Command Set: Supported 00:16:01.122 SGL Keyed: Supported 00:16:01.122 SGL Bit Bucket Descriptor: Not Supported 00:16:01.122 SGL Metadata Pointer: Not Supported 00:16:01.122 Oversized SGL: Not Supported 00:16:01.122 SGL Metadata Address: Not Supported 00:16:01.122 SGL Offset: Supported 00:16:01.122 Transport SGL Data Block: Not Supported 00:16:01.122 Replay Protected Memory Block: Not Supported 00:16:01.122 00:16:01.122 Firmware Slot Information 00:16:01.122 ========================= 00:16:01.122 Active slot: 0 00:16:01.122 00:16:01.122 00:16:01.122 Error Log 00:16:01.122 ========= 00:16:01.122 00:16:01.122 Active Namespaces 00:16:01.122 ================= 00:16:01.122 Discovery Log Page 00:16:01.122 ================== 00:16:01.122 Generation Counter: 2 00:16:01.122 Number of Records: 2 00:16:01.122 Record Format: 0 00:16:01.122 00:16:01.122 Discovery Log Entry 0 00:16:01.122 ---------------------- 00:16:01.122 Transport Type: 3 (TCP) 00:16:01.122 Address Family: 1 (IPv4) 00:16:01.122 Subsystem Type: 3 (Current Discovery Subsystem) 00:16:01.122 Entry Flags: 00:16:01.122 Duplicate Returned Information: 1 00:16:01.122 Explicit Persistent Connection Support for Discovery: 1 00:16:01.122 Transport Requirements: 00:16:01.122 Secure Channel: Not Required 00:16:01.122 Port ID: 0 (0x0000) 00:16:01.122 Controller ID: 65535 (0xffff) 00:16:01.122 Admin Max SQ Size: 128 00:16:01.122 Transport Service Identifier: 4420 00:16:01.122 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:16:01.122 Transport Address: 10.0.0.2 00:16:01.122 
Discovery Log Entry 1 00:16:01.122 ---------------------- 00:16:01.122 Transport Type: 3 (TCP) 00:16:01.122 Address Family: 1 (IPv4) 00:16:01.122 Subsystem Type: 2 (NVM Subsystem) 00:16:01.122 Entry Flags: 00:16:01.122 Duplicate Returned Information: 0 00:16:01.122 Explicit Persistent Connection Support for Discovery: 0 00:16:01.122 Transport Requirements: 00:16:01.122 Secure Channel: Not Required 00:16:01.122 Port ID: 0 (0x0000) 00:16:01.122 Controller ID: 65535 (0xffff) 00:16:01.122 Admin Max SQ Size: 128 00:16:01.122 Transport Service Identifier: 4420 00:16:01.122 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:16:01.122 Transport Address: 10.0.0.2 [2024-07-15 18:35:23.529079] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:16:01.122 [2024-07-15 18:35:23.529121] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdf840) on tqpair=0x1f9ca60 00:16:01.122 [2024-07-15 18:35:23.529142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.122 [2024-07-15 18:35:23.529161] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdf9c0) on tqpair=0x1f9ca60 00:16:01.122 [2024-07-15 18:35:23.529177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.122 [2024-07-15 18:35:23.529194] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdfb40) on tqpair=0x1f9ca60 00:16:01.122 [2024-07-15 18:35:23.529210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.122 [2024-07-15 18:35:23.529227] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdfcc0) on tqpair=0x1f9ca60 00:16:01.122 [2024-07-15 18:35:23.529243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.122 [2024-07-15 18:35:23.529272] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:01.122 [2024-07-15 18:35:23.529287] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:01.122 [2024-07-15 18:35:23.529300] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f9ca60) 00:16:01.122 [2024-07-15 18:35:23.529326] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.122 [2024-07-15 18:35:23.529396] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdfcc0, cid 3, qid 0 00:16:01.122 [2024-07-15 18:35:23.529447] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:01.122 [2024-07-15 18:35:23.529467] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:01.122 [2024-07-15 18:35:23.529479] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:01.122 [2024-07-15 18:35:23.529493] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdfcc0) on tqpair=0x1f9ca60 00:16:01.122 [2024-07-15 18:35:23.529513] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:01.122 [2024-07-15 18:35:23.529527] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:01.122 [2024-07-15 18:35:23.529539] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f9ca60) 00:16:01.122 [2024-07-15 
18:35:23.529560] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.122 [2024-07-15 18:35:23.529647] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdfcc0, cid 3, qid 0 00:16:01.122 [2024-07-15 18:35:23.529705] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:01.122 [2024-07-15 18:35:23.529724] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:01.122 [2024-07-15 18:35:23.529737] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:01.122 [2024-07-15 18:35:23.529750] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdfcc0) on tqpair=0x1f9ca60 00:16:01.122 [2024-07-15 18:35:23.529765] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:16:01.122 [2024-07-15 18:35:23.529782] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:16:01.122 [2024-07-15 18:35:23.529811] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:01.122 [2024-07-15 18:35:23.529825] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:01.122 [2024-07-15 18:35:23.529838] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f9ca60) 00:16:01.122 [2024-07-15 18:35:23.529859] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.122 [2024-07-15 18:35:23.529903] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdfcc0, cid 3, qid 0 00:16:01.122 [2024-07-15 18:35:23.529949] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:01.122 [2024-07-15 18:35:23.529969] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:01.122 [2024-07-15 18:35:23.529981] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:01.122 [2024-07-15 18:35:23.529994] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdfcc0) on tqpair=0x1f9ca60 00:16:01.122 [2024-07-15 18:35:23.530024] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:01.122 [2024-07-15 18:35:23.530038] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:01.123 [2024-07-15 18:35:23.530050] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f9ca60) 00:16:01.123 [2024-07-15 18:35:23.530070] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.123 [2024-07-15 18:35:23.530114] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdfcc0, cid 3, qid 0 00:16:01.123 [2024-07-15 18:35:23.530159] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:01.123 [2024-07-15 18:35:23.530179] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:01.123 [2024-07-15 18:35:23.530191] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:01.123 [2024-07-15 18:35:23.530204] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdfcc0) on tqpair=0x1f9ca60 00:16:01.123 [2024-07-15 18:35:23.530233] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:01.123 [2024-07-15 18:35:23.530247] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:01.123 [2024-07-15 18:35:23.530259] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f9ca60) 00:16:01.123 [2024-07-15 18:35:23.530280] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.123 [2024-07-15 18:35:23.530323] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdfcc0, cid 3, qid 0 00:16:01.123 [2024-07-15 18:35:23.530369] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:01.123 [2024-07-15 18:35:23.530388] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:01.123 [2024-07-15 18:35:23.530401] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:01.123 [2024-07-15 18:35:23.530414] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdfcc0) on tqpair=0x1f9ca60 00:16:01.123 [2024-07-15 18:35:23.530443] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:01.123 [2024-07-15 18:35:23.530457] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:01.123 [2024-07-15 18:35:23.530469] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f9ca60) 00:16:01.123 [2024-07-15 18:35:23.530489] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.123 [2024-07-15 18:35:23.530533] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdfcc0, cid 3, qid 0 00:16:01.123 [2024-07-15 18:35:23.530592] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:01.123 [2024-07-15 18:35:23.530613] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:01.123 [2024-07-15 18:35:23.530625] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:01.123 [2024-07-15 18:35:23.530639] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdfcc0) on tqpair=0x1f9ca60 00:16:01.123 [2024-07-15 18:35:23.530668] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:01.123 [2024-07-15 18:35:23.530682] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:01.123 [2024-07-15 18:35:23.530694] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f9ca60) 00:16:01.123 [2024-07-15 18:35:23.530715] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.123 [2024-07-15 18:35:23.530760] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdfcc0, cid 3, qid 0 00:16:01.123 [2024-07-15 18:35:23.530805] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:01.123 [2024-07-15 18:35:23.530825] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:01.123 [2024-07-15 18:35:23.530837] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:01.123 [2024-07-15 18:35:23.530850] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdfcc0) on tqpair=0x1f9ca60 00:16:01.123 [2024-07-15 18:35:23.530879] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:01.123 [2024-07-15 18:35:23.530892] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:01.123 [2024-07-15 18:35:23.530905] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f9ca60) 00:16:01.123 [2024-07-15 18:35:23.530925] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.123 [2024-07-15 18:35:23.530969] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdfcc0, cid 3, qid 0 00:16:01.123 [2024-07-15 18:35:23.531015] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:01.123 [2024-07-15 18:35:23.531034] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:01.123 [2024-07-15 18:35:23.531046] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:01.123 [2024-07-15 18:35:23.531088] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdfcc0) on tqpair=0x1f9ca60 00:16:01.123 [2024-07-15 18:35:23.531117] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:01.123 [2024-07-15 18:35:23.531131] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:01.123 [2024-07-15 18:35:23.531143] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f9ca60) 00:16:01.123 [2024-07-15 18:35:23.531164] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.123 [2024-07-15 18:35:23.531209] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdfcc0, cid 3, qid 0 00:16:01.123 [2024-07-15 18:35:23.531254] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:01.123 [2024-07-15 18:35:23.531274] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:01.123 [2024-07-15 18:35:23.531286] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:01.123 [2024-07-15 18:35:23.531299] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdfcc0) on tqpair=0x1f9ca60 00:16:01.123 [2024-07-15 18:35:23.531328] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:01.123 [2024-07-15 18:35:23.531342] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:01.123 [2024-07-15 18:35:23.531354] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f9ca60) 00:16:01.123 [2024-07-15 18:35:23.531374] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.123 [2024-07-15 18:35:23.531418] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdfcc0, cid 3, qid 0 00:16:01.123 [2024-07-15 18:35:23.531464] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:01.123 [2024-07-15 18:35:23.531483] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:01.123 [2024-07-15 18:35:23.531496] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:01.123 [2024-07-15 18:35:23.531509] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdfcc0) on tqpair=0x1f9ca60 00:16:01.123 [2024-07-15 18:35:23.531538] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:01.123 [2024-07-15 18:35:23.531552] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:01.123 [2024-07-15 18:35:23.531564] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f9ca60) 00:16:01.123 [2024-07-15 18:35:23.531607] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.123 [2024-07-15 18:35:23.531652] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdfcc0, cid 3, qid 0 00:16:01.123 
[2024-07-15 18:35:23.531698] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:01.123 [2024-07-15 18:35:23.531718] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:01.123 [2024-07-15 18:35:23.531730] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:01.123 [2024-07-15 18:35:23.531744] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdfcc0) on tqpair=0x1f9ca60 00:16:01.123 [2024-07-15 18:35:23.531772] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:01.123 [2024-07-15 18:35:23.531786] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:01.123 [2024-07-15 18:35:23.531799] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f9ca60) 00:16:01.123 [2024-07-15 18:35:23.531819] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.123 [2024-07-15 18:35:23.531862] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdfcc0, cid 3, qid 0 00:16:01.123 [2024-07-15 18:35:23.531913] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:01.123 [2024-07-15 18:35:23.531933] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:01.123 [2024-07-15 18:35:23.531945] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:01.123 [2024-07-15 18:35:23.531958] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdfcc0) on tqpair=0x1f9ca60 00:16:01.123 [2024-07-15 18:35:23.531987] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:01.123 [2024-07-15 18:35:23.532001] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:01.123 [2024-07-15 18:35:23.532013] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f9ca60) 00:16:01.123 [2024-07-15 18:35:23.532033] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.123 [2024-07-15 18:35:23.532077] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdfcc0, cid 3, qid 0 00:16:01.123 [2024-07-15 18:35:23.532122] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:01.123 [2024-07-15 18:35:23.532142] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:01.123 [2024-07-15 18:35:23.532154] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:01.123 [2024-07-15 18:35:23.532167] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdfcc0) on tqpair=0x1f9ca60 00:16:01.123 [2024-07-15 18:35:23.532196] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:01.123 [2024-07-15 18:35:23.532210] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:01.123 [2024-07-15 18:35:23.532222] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f9ca60) 00:16:01.123 [2024-07-15 18:35:23.532243] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.123 [2024-07-15 18:35:23.532286] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdfcc0, cid 3, qid 0 00:16:01.123 [2024-07-15 18:35:23.532331] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:01.123 [2024-07-15 18:35:23.532351] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:16:01.123 [2024-07-15 18:35:23.532363] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:01.123 [2024-07-15 18:35:23.532377] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdfcc0) on tqpair=0x1f9ca60 00:16:01.123 [2024-07-15 18:35:23.532405] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:01.123 [2024-07-15 18:35:23.532419] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:01.123 [2024-07-15 18:35:23.532431] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f9ca60) 00:16:01.123 [2024-07-15 18:35:23.532452] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.123 [2024-07-15 18:35:23.532495] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdfcc0, cid 3, qid 0 00:16:01.123 [2024-07-15 18:35:23.532540] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:01.123 [2024-07-15 18:35:23.532560] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:01.123 [2024-07-15 18:35:23.536617] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:01.123 [2024-07-15 18:35:23.536635] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdfcc0) on tqpair=0x1f9ca60 00:16:01.123 [2024-07-15 18:35:23.536669] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:01.123 [2024-07-15 18:35:23.536683] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:01.123 [2024-07-15 18:35:23.536696] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f9ca60) 00:16:01.123 [2024-07-15 18:35:23.536718] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.123 [2024-07-15 18:35:23.536773] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdfcc0, cid 3, qid 0 00:16:01.123 [2024-07-15 18:35:23.536820] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:01.123 [2024-07-15 18:35:23.536840] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:01.123 [2024-07-15 18:35:23.536852] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:01.123 [2024-07-15 18:35:23.536866] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdfcc0) on tqpair=0x1f9ca60 00:16:01.124 [2024-07-15 18:35:23.536889] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:16:01.124 00:16:01.124 18:35:23 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:16:01.124 [2024-07-15 18:35:23.593863] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 
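Both spdk_nvme_identify invocations in this run, first against the discovery subsystem and here against nqn.2016-06.io.spdk:cnode1, can be reproduced by hand with the same transport-ID string; -L all enables the debug log flags, which is what interleaves the nvme_tcp.c/nvme_ctrlr.c *DEBUG* lines with the report. A sketch assuming the build-tree path used on this CI host:

  IDENTIFY=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify
  # discovery subsystem: controller capabilities plus the discovery log entries shown above
  $IDENTIFY -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all
  # NVM subsystem exposing the Malloc0 namespace
  $IDENTIFY -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
  # drop '-L all' to get the report without the debug traces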
00:16:01.124 [2024-07-15 18:35:23.593908] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86359 ] 00:16:01.124 [2024-07-15 18:35:23.729962] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:16:01.124 [2024-07-15 18:35:23.730014] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:16:01.124 [2024-07-15 18:35:23.730019] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:16:01.124 [2024-07-15 18:35:23.730030] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:16:01.124 [2024-07-15 18:35:23.730036] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:16:01.124 [2024-07-15 18:35:23.730139] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:16:01.124 [2024-07-15 18:35:23.730175] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xb9da60 0 00:16:01.409 [2024-07-15 18:35:23.745581] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:16:01.409 [2024-07-15 18:35:23.745599] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:16:01.409 [2024-07-15 18:35:23.745604] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:16:01.409 [2024-07-15 18:35:23.745607] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:16:01.410 [2024-07-15 18:35:23.745650] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:01.410 [2024-07-15 18:35:23.745656] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:01.410 [2024-07-15 18:35:23.745661] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb9da60) 00:16:01.410 [2024-07-15 18:35:23.745672] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:16:01.410 [2024-07-15 18:35:23.745695] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbe0840, cid 0, qid 0 00:16:01.410 [2024-07-15 18:35:23.753586] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:01.410 [2024-07-15 18:35:23.753603] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:01.410 [2024-07-15 18:35:23.753607] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:01.410 [2024-07-15 18:35:23.753612] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbe0840) on tqpair=0xb9da60 00:16:01.410 [2024-07-15 18:35:23.753620] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:16:01.410 [2024-07-15 18:35:23.753628] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:16:01.410 [2024-07-15 18:35:23.753634] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:16:01.410 [2024-07-15 18:35:23.753650] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:01.410 [2024-07-15 18:35:23.753655] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:01.410 [2024-07-15 18:35:23.753658] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb9da60) 00:16:01.410 [2024-07-15 18:35:23.753666] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.410 [2024-07-15 18:35:23.753690] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbe0840, cid 0, qid 0 00:16:01.410 [2024-07-15 18:35:23.753738] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:01.410 [2024-07-15 18:35:23.753744] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:01.410 [2024-07-15 18:35:23.753748] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:01.410 [2024-07-15 18:35:23.753752] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbe0840) on tqpair=0xb9da60 00:16:01.410 [2024-07-15 18:35:23.753757] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:16:01.410 [2024-07-15 18:35:23.753764] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:16:01.410 [2024-07-15 18:35:23.753771] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:01.410 [2024-07-15 18:35:23.753774] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:01.410 [2024-07-15 18:35:23.753778] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb9da60) 00:16:01.410 [2024-07-15 18:35:23.753784] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.410 [2024-07-15 18:35:23.753798] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbe0840, cid 0, qid 0 00:16:01.410 [2024-07-15 18:35:23.753840] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:01.410 [2024-07-15 18:35:23.753846] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:01.410 [2024-07-15 18:35:23.753850] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:01.410 [2024-07-15 18:35:23.753853] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbe0840) on tqpair=0xb9da60 00:16:01.410 [2024-07-15 18:35:23.753858] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:16:01.410 [2024-07-15 18:35:23.753866] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:16:01.410 [2024-07-15 18:35:23.753872] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:01.410 [2024-07-15 18:35:23.753876] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:01.410 [2024-07-15 18:35:23.753880] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb9da60) 00:16:01.410 [2024-07-15 18:35:23.753886] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.410 [2024-07-15 18:35:23.753899] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbe0840, cid 0, qid 0 00:16:01.410 [2024-07-15 18:35:23.753939] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:01.410 [2024-07-15 18:35:23.753945] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:01.410 [2024-07-15 18:35:23.753948] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:01.410 [2024-07-15 18:35:23.753952] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbe0840) on tqpair=0xb9da60 00:16:01.410 [2024-07-15 18:35:23.753957] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:01.410 [2024-07-15 18:35:23.753965] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:01.410 [2024-07-15 18:35:23.753969] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:01.410 [2024-07-15 18:35:23.753973] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb9da60) 00:16:01.410 [2024-07-15 18:35:23.753979] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.410 [2024-07-15 18:35:23.753992] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbe0840, cid 0, qid 0 00:16:01.410 [2024-07-15 18:35:23.754034] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:01.410 [2024-07-15 18:35:23.754040] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:01.410 [2024-07-15 18:35:23.754043] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:01.410 [2024-07-15 18:35:23.754047] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbe0840) on tqpair=0xb9da60 00:16:01.410 [2024-07-15 18:35:23.754051] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:16:01.410 [2024-07-15 18:35:23.754056] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:16:01.410 [2024-07-15 18:35:23.754063] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:01.410 [2024-07-15 18:35:23.754169] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:16:01.410 [2024-07-15 18:35:23.754173] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:01.410 [2024-07-15 18:35:23.754180] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:01.410 [2024-07-15 18:35:23.754184] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:01.410 [2024-07-15 18:35:23.754188] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb9da60) 00:16:01.410 [2024-07-15 18:35:23.754194] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.410 [2024-07-15 18:35:23.754207] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbe0840, cid 0, qid 0 00:16:01.410 [2024-07-15 18:35:23.754252] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:01.410 [2024-07-15 18:35:23.754258] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:01.410 [2024-07-15 18:35:23.754261] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:01.410 [2024-07-15 18:35:23.754265] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbe0840) on tqpair=0xb9da60 00:16:01.410 [2024-07-15 18:35:23.754270] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:01.410 [2024-07-15 18:35:23.754278] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:01.410 [2024-07-15 18:35:23.754282] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:01.410 [2024-07-15 18:35:23.754286] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb9da60) 00:16:01.410 [2024-07-15 18:35:23.754292] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.410 [2024-07-15 18:35:23.754305] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbe0840, cid 0, qid 0 00:16:01.410 [2024-07-15 18:35:23.754345] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:01.410 [2024-07-15 18:35:23.754351] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:01.410 [2024-07-15 18:35:23.754354] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:01.410 [2024-07-15 18:35:23.754358] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbe0840) on tqpair=0xb9da60 00:16:01.410 [2024-07-15 18:35:23.754362] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:01.410 [2024-07-15 18:35:23.754367] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:16:01.410 [2024-07-15 18:35:23.754374] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:16:01.410 [2024-07-15 18:35:23.754383] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:16:01.410 [2024-07-15 18:35:23.754392] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:01.410 [2024-07-15 18:35:23.754396] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb9da60) 00:16:01.410 [2024-07-15 18:35:23.754402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.410 [2024-07-15 18:35:23.754415] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbe0840, cid 0, qid 0 00:16:01.410 [2024-07-15 18:35:23.754488] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:01.410 [2024-07-15 18:35:23.754494] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:01.410 [2024-07-15 18:35:23.754497] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:01.410 [2024-07-15 18:35:23.754501] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb9da60): datao=0, datal=4096, cccid=0 00:16:01.410 [2024-07-15 18:35:23.754506] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbe0840) on tqpair(0xb9da60): expected_datao=0, payload_size=4096 00:16:01.410 [2024-07-15 18:35:23.754511] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:01.410 [2024-07-15 18:35:23.754518] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:01.410 [2024-07-15 18:35:23.754522] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:01.410 [2024-07-15 
18:35:23.754530] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:01.410 [2024-07-15 18:35:23.754536] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:01.410 [2024-07-15 18:35:23.754539] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:01.410 [2024-07-15 18:35:23.754543] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbe0840) on tqpair=0xb9da60 00:16:01.410 [2024-07-15 18:35:23.754550] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:16:01.410 [2024-07-15 18:35:23.754555] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:16:01.410 [2024-07-15 18:35:23.754560] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:16:01.410 [2024-07-15 18:35:23.754564] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:16:01.410 [2024-07-15 18:35:23.754578] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:16:01.410 [2024-07-15 18:35:23.754584] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:16:01.410 [2024-07-15 18:35:23.754592] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:16:01.410 [2024-07-15 18:35:23.754598] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:01.410 [2024-07-15 18:35:23.754602] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:01.411 [2024-07-15 18:35:23.754605] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb9da60) 00:16:01.411 [2024-07-15 18:35:23.754612] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:01.411 [2024-07-15 18:35:23.754626] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbe0840, cid 0, qid 0 00:16:01.411 [2024-07-15 18:35:23.754669] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:01.411 [2024-07-15 18:35:23.754675] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:01.411 [2024-07-15 18:35:23.754679] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:01.411 [2024-07-15 18:35:23.754683] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbe0840) on tqpair=0xb9da60 00:16:01.411 [2024-07-15 18:35:23.754689] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:01.411 [2024-07-15 18:35:23.754693] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:01.411 [2024-07-15 18:35:23.754696] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb9da60) 00:16:01.411 [2024-07-15 18:35:23.754702] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:01.411 [2024-07-15 18:35:23.754708] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:01.411 [2024-07-15 18:35:23.754711] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:01.411 [2024-07-15 18:35:23.754715] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xb9da60) 00:16:01.411 
[2024-07-15 18:35:23.754720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:01.411 [2024-07-15 18:35:23.754726] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:01.411 [2024-07-15 18:35:23.754730] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:01.411 [2024-07-15 18:35:23.754733] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xb9da60) 00:16:01.411 [2024-07-15 18:35:23.754739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:01.411 [2024-07-15 18:35:23.754744] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:01.411 [2024-07-15 18:35:23.754748] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:01.411 [2024-07-15 18:35:23.754752] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb9da60) 00:16:01.411 [2024-07-15 18:35:23.754757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:01.411 [2024-07-15 18:35:23.754762] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:16:01.411 [2024-07-15 18:35:23.754773] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:01.411 [2024-07-15 18:35:23.754779] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:01.411 [2024-07-15 18:35:23.754783] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb9da60) 00:16:01.411 [2024-07-15 18:35:23.754790] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.411 [2024-07-15 18:35:23.754805] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbe0840, cid 0, qid 0 00:16:01.411 [2024-07-15 18:35:23.754810] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbe09c0, cid 1, qid 0 00:16:01.411 [2024-07-15 18:35:23.754815] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbe0b40, cid 2, qid 0 00:16:01.411 [2024-07-15 18:35:23.754819] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbe0cc0, cid 3, qid 0 00:16:01.411 [2024-07-15 18:35:23.754824] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbe0e40, cid 4, qid 0 00:16:01.411 [2024-07-15 18:35:23.754895] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:01.411 [2024-07-15 18:35:23.754900] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:01.411 [2024-07-15 18:35:23.754904] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:01.411 [2024-07-15 18:35:23.754908] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbe0e40) on tqpair=0xb9da60 00:16:01.411 [2024-07-15 18:35:23.754912] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:16:01.411 [2024-07-15 18:35:23.754920] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:01.411 [2024-07-15 18:35:23.754928] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:16:01.411 [2024-07-15 18:35:23.754934] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:16:01.411 [2024-07-15 18:35:23.754940] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:01.411 [2024-07-15 18:35:23.754944] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:01.411 [2024-07-15 18:35:23.754948] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb9da60) 00:16:01.411 [2024-07-15 18:35:23.754954] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:01.411 [2024-07-15 18:35:23.754968] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbe0e40, cid 4, qid 0 00:16:01.411 [2024-07-15 18:35:23.755018] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:01.411 [2024-07-15 18:35:23.755023] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:01.411 [2024-07-15 18:35:23.755027] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:01.411 [2024-07-15 18:35:23.755031] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbe0e40) on tqpair=0xb9da60 00:16:01.411 [2024-07-15 18:35:23.755086] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:16:01.411 [2024-07-15 18:35:23.755094] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:16:01.411 [2024-07-15 18:35:23.755102] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:01.411 [2024-07-15 18:35:23.755105] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb9da60) 00:16:01.411 [2024-07-15 18:35:23.755111] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.411 [2024-07-15 18:35:23.755125] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbe0e40, cid 4, qid 0 00:16:01.411 [2024-07-15 18:35:23.755178] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:01.411 [2024-07-15 18:35:23.755183] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:01.411 [2024-07-15 18:35:23.755187] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:01.411 [2024-07-15 18:35:23.755191] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb9da60): datao=0, datal=4096, cccid=4 00:16:01.411 [2024-07-15 18:35:23.755196] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbe0e40) on tqpair(0xb9da60): expected_datao=0, payload_size=4096 00:16:01.411 [2024-07-15 18:35:23.755200] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:01.411 [2024-07-15 18:35:23.755206] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:01.411 [2024-07-15 18:35:23.755210] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:01.411 [2024-07-15 18:35:23.755217] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:01.411 [2024-07-15 18:35:23.755223] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:16:01.411 [2024-07-15 18:35:23.755227] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:01.411 [2024-07-15 18:35:23.755230] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbe0e40) on tqpair=0xb9da60 00:16:01.411 [2024-07-15 18:35:23.755241] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:16:01.411 [2024-07-15 18:35:23.755250] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:16:01.411 [2024-07-15 18:35:23.755258] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:16:01.411 [2024-07-15 18:35:23.755264] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:01.411 [2024-07-15 18:35:23.755269] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb9da60) 00:16:01.411 [2024-07-15 18:35:23.755274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.411 [2024-07-15 18:35:23.755288] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbe0e40, cid 4, qid 0 00:16:01.411 [2024-07-15 18:35:23.755346] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:01.411 [2024-07-15 18:35:23.755351] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:01.411 [2024-07-15 18:35:23.755355] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:01.411 [2024-07-15 18:35:23.755359] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb9da60): datao=0, datal=4096, cccid=4 00:16:01.411 [2024-07-15 18:35:23.755363] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbe0e40) on tqpair(0xb9da60): expected_datao=0, payload_size=4096 00:16:01.411 [2024-07-15 18:35:23.755368] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:01.411 [2024-07-15 18:35:23.755374] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:01.411 [2024-07-15 18:35:23.755377] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:01.411 [2024-07-15 18:35:23.755385] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:01.411 [2024-07-15 18:35:23.755390] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:01.411 [2024-07-15 18:35:23.755394] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:01.411 [2024-07-15 18:35:23.755397] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbe0e40) on tqpair=0xb9da60 00:16:01.411 [2024-07-15 18:35:23.755409] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:01.411 [2024-07-15 18:35:23.755417] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:01.411 [2024-07-15 18:35:23.755424] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:01.411 [2024-07-15 18:35:23.755428] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb9da60) 00:16:01.411 [2024-07-15 18:35:23.755434] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.411 [2024-07-15 18:35:23.755447] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbe0e40, cid 4, qid 0 00:16:01.411 [2024-07-15 18:35:23.755493] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:01.411 [2024-07-15 18:35:23.755499] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:01.411 [2024-07-15 18:35:23.755502] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:01.411 [2024-07-15 18:35:23.755506] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb9da60): datao=0, datal=4096, cccid=4 00:16:01.411 [2024-07-15 18:35:23.755511] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbe0e40) on tqpair(0xb9da60): expected_datao=0, payload_size=4096 00:16:01.411 [2024-07-15 18:35:23.755515] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:01.411 [2024-07-15 18:35:23.755521] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:01.411 [2024-07-15 18:35:23.755524] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:01.411 [2024-07-15 18:35:23.755532] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:01.411 [2024-07-15 18:35:23.755537] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:01.411 [2024-07-15 18:35:23.755541] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:01.411 [2024-07-15 18:35:23.755545] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbe0e40) on tqpair=0xb9da60 00:16:01.411 [2024-07-15 18:35:23.755551] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:01.411 [2024-07-15 18:35:23.755559] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:16:01.412 [2024-07-15 18:35:23.755575] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:16:01.412 [2024-07-15 18:35:23.755582] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:16:01.412 [2024-07-15 18:35:23.755587] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:01.412 [2024-07-15 18:35:23.755592] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:16:01.412 [2024-07-15 18:35:23.755597] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:16:01.412 [2024-07-15 18:35:23.755602] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:16:01.412 [2024-07-15 18:35:23.755608] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:16:01.412 [2024-07-15 18:35:23.755622] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:01.412 [2024-07-15 18:35:23.755626] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb9da60) 00:16:01.412 [2024-07-15 18:35:23.755631] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.412 [2024-07-15 18:35:23.755638] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:01.412 [2024-07-15 18:35:23.755642] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:01.412 [2024-07-15 18:35:23.755646] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb9da60) 00:16:01.412 [2024-07-15 18:35:23.755651] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:16:01.412 [2024-07-15 18:35:23.755670] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbe0e40, cid 4, qid 0 00:16:01.412 [2024-07-15 18:35:23.755676] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbe0fc0, cid 5, qid 0 00:16:01.412 [2024-07-15 18:35:23.755729] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:01.412 [2024-07-15 18:35:23.755734] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:01.412 [2024-07-15 18:35:23.755738] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:01.412 [2024-07-15 18:35:23.755742] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbe0e40) on tqpair=0xb9da60 00:16:01.412 [2024-07-15 18:35:23.755748] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:01.412 [2024-07-15 18:35:23.755753] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:01.412 [2024-07-15 18:35:23.755757] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:01.412 [2024-07-15 18:35:23.755761] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbe0fc0) on tqpair=0xb9da60 00:16:01.412 [2024-07-15 18:35:23.755769] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:01.412 [2024-07-15 18:35:23.755773] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb9da60) 00:16:01.412 [2024-07-15 18:35:23.755779] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.412 [2024-07-15 18:35:23.755792] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbe0fc0, cid 5, qid 0 00:16:01.412 [2024-07-15 18:35:23.755837] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:01.412 [2024-07-15 18:35:23.755843] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:01.412 [2024-07-15 18:35:23.755846] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:01.412 [2024-07-15 18:35:23.755850] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbe0fc0) on tqpair=0xb9da60 00:16:01.412 [2024-07-15 18:35:23.755859] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:01.412 [2024-07-15 18:35:23.755863] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb9da60) 00:16:01.412 [2024-07-15 18:35:23.755869] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.412 [2024-07-15 18:35:23.755881] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbe0fc0, cid 5, qid 0 00:16:01.412 [2024-07-15 18:35:23.755923] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:01.412 [2024-07-15 18:35:23.755928] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:16:01.412 [2024-07-15 18:35:23.755932] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:01.412 [2024-07-15 18:35:23.755935] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbe0fc0) on tqpair=0xb9da60 00:16:01.412 [2024-07-15 18:35:23.755944] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:01.412 [2024-07-15 18:35:23.755948] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb9da60) 00:16:01.412 [2024-07-15 18:35:23.755954] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.412 [2024-07-15 18:35:23.755967] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbe0fc0, cid 5, qid 0 00:16:01.412 [2024-07-15 18:35:23.756004] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:01.412 [2024-07-15 18:35:23.756010] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:01.412 [2024-07-15 18:35:23.756013] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:01.412 [2024-07-15 18:35:23.756017] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbe0fc0) on tqpair=0xb9da60 00:16:01.412 [2024-07-15 18:35:23.756031] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:01.412 [2024-07-15 18:35:23.756036] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb9da60) 00:16:01.412 [2024-07-15 18:35:23.756041] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.412 [2024-07-15 18:35:23.756048] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:01.412 [2024-07-15 18:35:23.756052] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb9da60) 00:16:01.412 [2024-07-15 18:35:23.756057] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.412 [2024-07-15 18:35:23.756064] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:01.412 [2024-07-15 18:35:23.756068] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xb9da60) 00:16:01.412 [2024-07-15 18:35:23.756074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.412 [2024-07-15 18:35:23.756083] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:01.412 [2024-07-15 18:35:23.756087] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xb9da60) 00:16:01.412 [2024-07-15 18:35:23.756093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.412 [2024-07-15 18:35:23.756107] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbe0fc0, cid 5, qid 0 00:16:01.412 [2024-07-15 18:35:23.756112] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbe0e40, cid 4, qid 0 00:16:01.412 [2024-07-15 18:35:23.756117] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbe1140, cid 6, qid 0 00:16:01.412 [2024-07-15 
18:35:23.756121] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbe12c0, cid 7, qid 0 00:16:01.412 [2024-07-15 18:35:23.756224] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:01.412 [2024-07-15 18:35:23.756230] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:01.412 [2024-07-15 18:35:23.756234] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:01.412 [2024-07-15 18:35:23.756237] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb9da60): datao=0, datal=8192, cccid=5 00:16:01.412 [2024-07-15 18:35:23.756242] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbe0fc0) on tqpair(0xb9da60): expected_datao=0, payload_size=8192 00:16:01.412 [2024-07-15 18:35:23.756246] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:01.412 [2024-07-15 18:35:23.756260] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:01.412 [2024-07-15 18:35:23.756264] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:01.412 [2024-07-15 18:35:23.756269] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:01.412 [2024-07-15 18:35:23.756274] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:01.412 [2024-07-15 18:35:23.756278] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:01.412 [2024-07-15 18:35:23.756282] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb9da60): datao=0, datal=512, cccid=4 00:16:01.412 [2024-07-15 18:35:23.756286] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbe0e40) on tqpair(0xb9da60): expected_datao=0, payload_size=512 00:16:01.412 [2024-07-15 18:35:23.756291] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:01.412 [2024-07-15 18:35:23.756297] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:01.412 [2024-07-15 18:35:23.756300] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:01.412 [2024-07-15 18:35:23.756305] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:01.412 [2024-07-15 18:35:23.756311] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:01.412 [2024-07-15 18:35:23.756314] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:01.412 [2024-07-15 18:35:23.756318] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb9da60): datao=0, datal=512, cccid=6 00:16:01.412 [2024-07-15 18:35:23.756322] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbe1140) on tqpair(0xb9da60): expected_datao=0, payload_size=512 00:16:01.412 [2024-07-15 18:35:23.756327] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:01.412 [2024-07-15 18:35:23.756332] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:01.412 [2024-07-15 18:35:23.756336] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:01.412 [2024-07-15 18:35:23.756341] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:01.412 [2024-07-15 18:35:23.756346] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:01.412 [2024-07-15 18:35:23.756350] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:01.412 [2024-07-15 18:35:23.756353] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb9da60): datao=0, datal=4096, cccid=7 00:16:01.412 [2024-07-15 18:35:23.756358] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbe12c0) on tqpair(0xb9da60): expected_datao=0, payload_size=4096 00:16:01.412 [2024-07-15 18:35:23.756362] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:01.412 [2024-07-15 18:35:23.756369] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:01.412 [2024-07-15 18:35:23.756372] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:01.412 [2024-07-15 18:35:23.756380] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:01.412 [2024-07-15 18:35:23.756385] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:01.412 [2024-07-15 18:35:23.756388] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:01.412 [2024-07-15 18:35:23.756392] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbe0fc0) on tqpair=0xb9da60 00:16:01.412 [2024-07-15 18:35:23.756407] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:01.412 [2024-07-15 18:35:23.756413] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:01.412 [2024-07-15 18:35:23.756416] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:01.412 [2024-07-15 18:35:23.756420] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbe0e40) on tqpair=0xb9da60 00:16:01.412 [2024-07-15 18:35:23.756432] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:01.412 [2024-07-15 18:35:23.756437] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:01.412 [2024-07-15 18:35:23.756441] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:01.412 [2024-07-15 18:35:23.756444] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbe1140) on tqpair=0xb9da60 00:16:01.412 ===================================================== 00:16:01.412 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:01.412 ===================================================== 00:16:01.412 Controller Capabilities/Features 00:16:01.412 ================================ 00:16:01.413 Vendor ID: 8086 00:16:01.413 Subsystem Vendor ID: 8086 00:16:01.413 Serial Number: SPDK00000000000001 00:16:01.413 Model Number: SPDK bdev Controller 00:16:01.413 Firmware Version: 24.09 00:16:01.413 Recommended Arb Burst: 6 00:16:01.413 IEEE OUI Identifier: e4 d2 5c 00:16:01.413 Multi-path I/O 00:16:01.413 May have multiple subsystem ports: Yes 00:16:01.413 May have multiple controllers: Yes 00:16:01.413 Associated with SR-IOV VF: No 00:16:01.413 Max Data Transfer Size: 131072 00:16:01.413 Max Number of Namespaces: 32 00:16:01.413 Max Number of I/O Queues: 127 00:16:01.413 NVMe Specification Version (VS): 1.3 00:16:01.413 NVMe Specification Version (Identify): 1.3 00:16:01.413 Maximum Queue Entries: 128 00:16:01.413 Contiguous Queues Required: Yes 00:16:01.413 Arbitration Mechanisms Supported 00:16:01.413 Weighted Round Robin: Not Supported 00:16:01.413 Vendor Specific: Not Supported 00:16:01.413 Reset Timeout: 15000 ms 00:16:01.413 Doorbell Stride: 4 bytes 00:16:01.413 NVM Subsystem Reset: Not Supported 00:16:01.413 Command Sets Supported 00:16:01.413 NVM Command Set: Supported 00:16:01.413 Boot Partition: Not Supported 00:16:01.413 Memory Page Size Minimum: 4096 bytes 00:16:01.413 Memory Page Size Maximum: 4096 bytes 00:16:01.413 Persistent Memory Region: Not Supported 00:16:01.413 Optional Asynchronous Events Supported 00:16:01.413 Namespace Attribute Notices: Supported 00:16:01.413 
Firmware Activation Notices: Not Supported 00:16:01.413 ANA Change Notices: Not Supported 00:16:01.413 PLE Aggregate Log Change Notices: Not Supported 00:16:01.413 LBA Status Info Alert Notices: Not Supported 00:16:01.413 EGE Aggregate Log Change Notices: Not Supported 00:16:01.413 Normal NVM Subsystem Shutdown event: Not Supported 00:16:01.413 Zone Descriptor Change Notices: Not Supported 00:16:01.413 Discovery Log Change Notices: Not Supported 00:16:01.413 Controller Attributes 00:16:01.413 128-bit Host Identifier: Supported 00:16:01.413 Non-Operational Permissive Mode: Not Supported 00:16:01.413 NVM Sets: Not Supported 00:16:01.413 Read Recovery Levels: Not Supported 00:16:01.413 Endurance Groups: Not Supported 00:16:01.413 Predictable Latency Mode: Not Supported 00:16:01.413 Traffic Based Keep ALive: Not Supported 00:16:01.413 Namespace Granularity: Not Supported 00:16:01.413 SQ Associations: Not Supported 00:16:01.413 UUID List: Not Supported 00:16:01.413 Multi-Domain Subsystem: Not Supported 00:16:01.413 Fixed Capacity Management: Not Supported 00:16:01.413 Variable Capacity Management: Not Supported 00:16:01.413 Delete Endurance Group: Not Supported 00:16:01.413 Delete NVM Set: Not Supported 00:16:01.413 Extended LBA Formats Supported: Not Supported 00:16:01.413 Flexible Data Placement Supported: Not Supported 00:16:01.413 00:16:01.413 Controller Memory Buffer Support 00:16:01.413 ================================ 00:16:01.413 Supported: No 00:16:01.413 00:16:01.413 Persistent Memory Region Support 00:16:01.413 ================================ 00:16:01.413 Supported: No 00:16:01.413 00:16:01.413 Admin Command Set Attributes 00:16:01.413 ============================ 00:16:01.413 Security Send/Receive: Not Supported 00:16:01.413 Format NVM: Not Supported 00:16:01.413 Firmware Activate/Download: Not Supported 00:16:01.413 Namespace Management: Not Supported 00:16:01.413 Device Self-Test: Not Supported 00:16:01.413 Directives: Not Supported 00:16:01.413 NVMe-MI: Not Supported 00:16:01.413 Virtualization Management: Not Supported 00:16:01.413 Doorbell Buffer Config: Not Supported 00:16:01.413 Get LBA Status Capability: Not Supported 00:16:01.413 Command & Feature Lockdown Capability: Not Supported 00:16:01.413 Abort Command Limit: 4 00:16:01.413 Async Event Request Limit: 4 00:16:01.413 Number of Firmware Slots: N/A 00:16:01.413 Firmware Slot 1 Read-Only: N/A 00:16:01.413 Firmware Activation Without Reset: N/A 00:16:01.413 Multiple Update Detection Support: N/A 00:16:01.413 Firmware Update Granularity: No Information Provided 00:16:01.413 Per-Namespace SMART Log: No 00:16:01.413 Asymmetric Namespace Access Log Page: Not Supported 00:16:01.413 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:16:01.413 Command Effects Log Page: Supported 00:16:01.413 Get Log Page Extended Data: Supported 00:16:01.413 Telemetry Log Pages: Not Supported 00:16:01.413 Persistent Event Log Pages: Not Supported 00:16:01.413 Supported Log Pages Log Page: May Support 00:16:01.413 Commands Supported & Effects Log Page: Not Supported 00:16:01.413 Feature Identifiers & Effects Log Page:May Support 00:16:01.413 NVMe-MI Commands & Effects Log Page: May Support 00:16:01.413 Data Area 4 for Telemetry Log: Not Supported 00:16:01.413 Error Log Page Entries Supported: 128 00:16:01.413 Keep Alive: Supported 00:16:01.413 Keep Alive Granularity: 10000 ms 00:16:01.413 00:16:01.413 NVM Command Set Attributes 00:16:01.413 ========================== 00:16:01.413 Submission Queue Entry Size 00:16:01.413 Max: 64 00:16:01.413 Min: 64 
00:16:01.413 Completion Queue Entry Size 00:16:01.413 Max: 16 00:16:01.413 Min: 16 00:16:01.413 Number of Namespaces: 32 00:16:01.413 Compare Command: Supported 00:16:01.413 Write Uncorrectable Command: Not Supported 00:16:01.413 Dataset Management Command: Supported 00:16:01.413 Write Zeroes Command: Supported 00:16:01.413 Set Features Save Field: Not Supported 00:16:01.413 Reservations: Supported 00:16:01.413 Timestamp: Not Supported 00:16:01.413 Copy: Supported 00:16:01.413 Volatile Write Cache: Present 00:16:01.413 Atomic Write Unit (Normal): 1 00:16:01.413 Atomic Write Unit (PFail): 1 00:16:01.413 Atomic Compare & Write Unit: 1 00:16:01.413 Fused Compare & Write: Supported 00:16:01.413 Scatter-Gather List 00:16:01.413 SGL Command Set: Supported 00:16:01.413 SGL Keyed: Supported 00:16:01.413 SGL Bit Bucket Descriptor: Not Supported 00:16:01.413 SGL Metadata Pointer: Not Supported 00:16:01.413 Oversized SGL: Not Supported 00:16:01.413 SGL Metadata Address: Not Supported 00:16:01.413 SGL Offset: Supported 00:16:01.413 Transport SGL Data Block: Not Supported 00:16:01.413 Replay Protected Memory Block: Not Supported 00:16:01.413 00:16:01.413 Firmware Slot Information 00:16:01.413 ========================= 00:16:01.413 Active slot: 1 00:16:01.413 Slot 1 Firmware Revision: 24.09 00:16:01.413 00:16:01.413 00:16:01.413 Commands Supported and Effects 00:16:01.413 ============================== 00:16:01.413 Admin Commands 00:16:01.413 -------------- 00:16:01.413 Get Log Page (02h): Supported 00:16:01.413 Identify (06h): Supported 00:16:01.413 Abort (08h): Supported 00:16:01.413 Set Features (09h): Supported 00:16:01.413 Get Features (0Ah): Supported 00:16:01.413 Asynchronous Event Request (0Ch): Supported 00:16:01.413 Keep Alive (18h): Supported 00:16:01.413 I/O Commands 00:16:01.413 ------------ 00:16:01.413 Flush (00h): Supported LBA-Change 00:16:01.413 Write (01h): Supported LBA-Change 00:16:01.413 Read (02h): Supported 00:16:01.413 Compare (05h): Supported 00:16:01.413 Write Zeroes (08h): Supported LBA-Change 00:16:01.413 Dataset Management (09h): Supported LBA-Change 00:16:01.413 Copy (19h): Supported LBA-Change 00:16:01.413 00:16:01.413 Error Log 00:16:01.413 ========= 00:16:01.413 00:16:01.413 Arbitration 00:16:01.413 =========== 00:16:01.413 Arbitration Burst: 1 00:16:01.413 00:16:01.413 Power Management 00:16:01.413 ================ 00:16:01.413 Number of Power States: 1 00:16:01.413 Current Power State: Power State #0 00:16:01.413 Power State #0: 00:16:01.413 Max Power: 0.00 W 00:16:01.413 Non-Operational State: Operational 00:16:01.413 Entry Latency: Not Reported 00:16:01.413 Exit Latency: Not Reported 00:16:01.413 Relative Read Throughput: 0 00:16:01.413 Relative Read Latency: 0 00:16:01.413 Relative Write Throughput: 0 00:16:01.413 Relative Write Latency: 0 00:16:01.413 Idle Power: Not Reported 00:16:01.413 Active Power: Not Reported 00:16:01.413 Non-Operational Permissive Mode: Not Supported 00:16:01.413 00:16:01.413 Health Information 00:16:01.413 ================== 00:16:01.413 Critical Warnings: 00:16:01.413 Available Spare Space: OK 00:16:01.413 Temperature: OK 00:16:01.413 Device Reliability: OK 00:16:01.413 Read Only: No 00:16:01.413 Volatile Memory Backup: OK 00:16:01.413 Current Temperature: 0 Kelvin (-273 Celsius) 00:16:01.413 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:01.413 Available Spare: 0% 00:16:01.413 Available Spare Threshold: 0% 00:16:01.413 Life Percentage Used:[2024-07-15 18:35:23.756451] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:16:01.413 [2024-07-15 18:35:23.756457] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:01.413 [2024-07-15 18:35:23.756460] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:01.413 [2024-07-15 18:35:23.756464] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbe12c0) on tqpair=0xb9da60 00:16:01.413 [2024-07-15 18:35:23.756551] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:01.413 [2024-07-15 18:35:23.756556] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xb9da60) 00:16:01.413 [2024-07-15 18:35:23.756562] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.413 [2024-07-15 18:35:23.756600] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbe12c0, cid 7, qid 0 00:16:01.413 [2024-07-15 18:35:23.756655] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:01.414 [2024-07-15 18:35:23.756661] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:01.414 [2024-07-15 18:35:23.756665] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:01.414 [2024-07-15 18:35:23.756668] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbe12c0) on tqpair=0xb9da60 00:16:01.414 [2024-07-15 18:35:23.756701] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:16:01.414 [2024-07-15 18:35:23.756710] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbe0840) on tqpair=0xb9da60 00:16:01.414 [2024-07-15 18:35:23.756716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.414 [2024-07-15 18:35:23.756722] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbe09c0) on tqpair=0xb9da60 00:16:01.414 [2024-07-15 18:35:23.756727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.414 [2024-07-15 18:35:23.756732] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbe0b40) on tqpair=0xb9da60 00:16:01.414 [2024-07-15 18:35:23.756736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.414 [2024-07-15 18:35:23.756741] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbe0cc0) on tqpair=0xb9da60 00:16:01.414 [2024-07-15 18:35:23.756746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.414 [2024-07-15 18:35:23.756753] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:01.414 [2024-07-15 18:35:23.756757] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:01.414 [2024-07-15 18:35:23.756761] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb9da60) 00:16:01.414 [2024-07-15 18:35:23.756767] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.414 [2024-07-15 18:35:23.756782] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbe0cc0, cid 3, qid 0 00:16:01.414 [2024-07-15 18:35:23.756819] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:01.414 [2024-07-15 18:35:23.756825] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:01.414 [2024-07-15 18:35:23.756829] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:01.414 [2024-07-15 18:35:23.756832] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbe0cc0) on tqpair=0xb9da60 00:16:01.414 [2024-07-15 18:35:23.756838] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:01.414 [2024-07-15 18:35:23.756842] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:01.414 [2024-07-15 18:35:23.756846] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb9da60) 00:16:01.414 [2024-07-15 18:35:23.756851] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.414 [2024-07-15 18:35:23.756867] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbe0cc0, cid 3, qid 0 00:16:01.414 [2024-07-15 18:35:23.756922] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:01.414 [2024-07-15 18:35:23.756927] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:01.414 [2024-07-15 18:35:23.756931] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:01.414 [2024-07-15 18:35:23.756935] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbe0cc0) on tqpair=0xb9da60 00:16:01.414 [2024-07-15 18:35:23.756939] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:16:01.414 [2024-07-15 18:35:23.756944] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:16:01.414 [2024-07-15 18:35:23.756952] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:01.414 [2024-07-15 18:35:23.756956] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:01.414 [2024-07-15 18:35:23.756960] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb9da60) 00:16:01.414 [2024-07-15 18:35:23.756965] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.414 [2024-07-15 18:35:23.756978] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbe0cc0, cid 3, qid 0 00:16:01.414 [2024-07-15 18:35:23.757021] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:01.414 [2024-07-15 18:35:23.757027] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:01.414 [2024-07-15 18:35:23.757030] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:01.414 [2024-07-15 18:35:23.757034] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbe0cc0) on tqpair=0xb9da60 00:16:01.414 [2024-07-15 18:35:23.757042] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:01.414 [2024-07-15 18:35:23.757046] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:01.414 [2024-07-15 18:35:23.757050] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb9da60) 00:16:01.414 [2024-07-15 18:35:23.757056] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.414 [2024-07-15 18:35:23.757069] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbe0cc0, cid 3, qid 0 00:16:01.414 [2024-07-15 18:35:23.757106] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:01.414 [2024-07-15 18:35:23.757112] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:01.414 [2024-07-15 18:35:23.757115] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:01.414 [2024-07-15 18:35:23.757119] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbe0cc0) on tqpair=0xb9da60 00:16:01.414 [2024-07-15 18:35:23.757127] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:01.414 [2024-07-15 18:35:23.757131] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:01.414 [2024-07-15 18:35:23.757135] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb9da60) 00:16:01.414 [2024-07-15 18:35:23.757141] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.414 [2024-07-15 18:35:23.757153] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbe0cc0, cid 3, qid 0 00:16:01.414 [2024-07-15 18:35:23.757195] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:01.414 [2024-07-15 18:35:23.757201] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:01.414 [2024-07-15 18:35:23.757205] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:01.414 [2024-07-15 18:35:23.757208] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbe0cc0) on tqpair=0xb9da60 00:16:01.414 [2024-07-15 18:35:23.757217] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:01.414 [2024-07-15 18:35:23.757221] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:01.414 [2024-07-15 18:35:23.757224] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb9da60) 00:16:01.414 [2024-07-15 18:35:23.757230] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.414 [2024-07-15 18:35:23.757243] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbe0cc0, cid 3, qid 0 00:16:01.414 [2024-07-15 18:35:23.757282] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:01.414 [2024-07-15 18:35:23.757288] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:01.414 [2024-07-15 18:35:23.757291] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:01.414 [2024-07-15 18:35:23.757295] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbe0cc0) on tqpair=0xb9da60 00:16:01.414 [2024-07-15 18:35:23.757303] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:01.414 [2024-07-15 18:35:23.757307] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:01.414 [2024-07-15 18:35:23.757311] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb9da60) 00:16:01.414 [2024-07-15 18:35:23.757316] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.414 [2024-07-15 18:35:23.757329] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbe0cc0, cid 3, qid 0 00:16:01.414 [2024-07-15 18:35:23.757369] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:01.414 [2024-07-15 18:35:23.757374] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:01.414 [2024-07-15 18:35:23.757378] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:01.414 [2024-07-15 18:35:23.757382] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbe0cc0) on tqpair=0xb9da60 00:16:01.414 [2024-07-15 18:35:23.757390] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:01.414 [2024-07-15 18:35:23.757394] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:01.414 [2024-07-15 18:35:23.757397] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb9da60) 00:16:01.414 [2024-07-15 18:35:23.757403] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.414 [2024-07-15 18:35:23.757416] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbe0cc0, cid 3, qid 0 00:16:01.414 [2024-07-15 18:35:23.757455] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:01.414 [2024-07-15 18:35:23.757461] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:01.414 [2024-07-15 18:35:23.757465] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:01.414 [2024-07-15 18:35:23.757469] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbe0cc0) on tqpair=0xb9da60 00:16:01.414 [2024-07-15 18:35:23.757477] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:01.414 [2024-07-15 18:35:23.757481] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:01.414 [2024-07-15 18:35:23.757484] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb9da60) 00:16:01.414 [2024-07-15 18:35:23.757490] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.415 [2024-07-15 18:35:23.757503] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbe0cc0, cid 3, qid 0 00:16:01.415 [2024-07-15 18:35:23.757547] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:01.415 [2024-07-15 18:35:23.757553] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:01.415 [2024-07-15 18:35:23.757557] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:01.415 [2024-07-15 18:35:23.757561] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbe0cc0) on tqpair=0xb9da60 00:16:01.415 [2024-07-15 18:35:23.761581] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:01.415 [2024-07-15 18:35:23.761592] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:01.415 [2024-07-15 18:35:23.761596] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb9da60) 00:16:01.415 [2024-07-15 18:35:23.761602] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.415 [2024-07-15 18:35:23.761621] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbe0cc0, cid 3, qid 0 00:16:01.415 [2024-07-15 18:35:23.761664] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:01.415 [2024-07-15 18:35:23.761670] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:01.415 [2024-07-15 18:35:23.761673] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:01.415 [2024-07-15 18:35:23.761677] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbe0cc0) on tqpair=0xb9da60 00:16:01.415 
[2024-07-15 18:35:23.761684] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 4 milliseconds 00:16:01.415 0% 00:16:01.415 Data Units Read: 0 00:16:01.415 Data Units Written: 0 00:16:01.415 Host Read Commands: 0 00:16:01.415 Host Write Commands: 0 00:16:01.415 Controller Busy Time: 0 minutes 00:16:01.415 Power Cycles: 0 00:16:01.415 Power On Hours: 0 hours 00:16:01.415 Unsafe Shutdowns: 0 00:16:01.415 Unrecoverable Media Errors: 0 00:16:01.415 Lifetime Error Log Entries: 0 00:16:01.415 Warning Temperature Time: 0 minutes 00:16:01.415 Critical Temperature Time: 0 minutes 00:16:01.415 00:16:01.415 Number of Queues 00:16:01.415 ================ 00:16:01.415 Number of I/O Submission Queues: 127 00:16:01.415 Number of I/O Completion Queues: 127 00:16:01.415 00:16:01.415 Active Namespaces 00:16:01.415 ================= 00:16:01.415 Namespace ID:1 00:16:01.415 Error Recovery Timeout: Unlimited 00:16:01.415 Command Set Identifier: NVM (00h) 00:16:01.415 Deallocate: Supported 00:16:01.415 Deallocated/Unwritten Error: Not Supported 00:16:01.415 Deallocated Read Value: Unknown 00:16:01.415 Deallocate in Write Zeroes: Not Supported 00:16:01.415 Deallocated Guard Field: 0xFFFF 00:16:01.415 Flush: Supported 00:16:01.415 Reservation: Supported 00:16:01.415 Namespace Sharing Capabilities: Multiple Controllers 00:16:01.415 Size (in LBAs): 131072 (0GiB) 00:16:01.415 Capacity (in LBAs): 131072 (0GiB) 00:16:01.415 Utilization (in LBAs): 131072 (0GiB) 00:16:01.415 NGUID: ABCDEF0123456789ABCDEF0123456789 00:16:01.415 EUI64: ABCDEF0123456789 00:16:01.415 UUID: bb4d23d2-f6ca-4e0e-bf95-90991e4a461a 00:16:01.415 Thin Provisioning: Not Supported 00:16:01.415 Per-NS Atomic Units: Yes 00:16:01.415 Atomic Boundary Size (Normal): 0 00:16:01.415 Atomic Boundary Size (PFail): 0 00:16:01.415 Atomic Boundary Offset: 0 00:16:01.415 Maximum Single Source Range Length: 65535 00:16:01.415 Maximum Copy Length: 65535 00:16:01.415 Maximum Source Range Count: 1 00:16:01.415 NGUID/EUI64 Never Reused: No 00:16:01.415 Namespace Write Protected: No 00:16:01.415 Number of LBA Formats: 1 00:16:01.415 Current LBA Format: LBA Format #00 00:16:01.415 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:01.415 00:16:01.415 18:35:23 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:16:01.415 18:35:23 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:01.415 18:35:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.415 18:35:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:01.415 18:35:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.415 18:35:23 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:16:01.415 18:35:23 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:16:01.415 18:35:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:01.415 18:35:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:16:01.415 18:35:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:01.415 18:35:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:16:01.415 18:35:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:01.415 18:35:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:01.415 rmmod nvme_tcp 00:16:01.415 rmmod nvme_fabrics 00:16:01.415 rmmod nvme_keyring 00:16:01.415 
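(For reference, the nvmf_identify cleanup running here, and continuing just below, reduces to the short sequence sketched next. This is a condensed, illustrative view of what the identify.sh trap / nvmftestfini path does in this run, reusing the subsystem NQN and the nvmf_tgt pid 86304 that appear in the log; rpc_cmd is the autotest wrapper around scripts/rpc.py.)

  # Drop the test subsystem over JSON-RPC, unload the host-side NVMe/TCP modules, then stop the target.
  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  modprobe -v -r nvme-tcp        # the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines in the log are its verbose output
  modprobe -v -r nvme-fabrics
  kill 86304 && wait 86304       # terminate the nvmf_tgt reactor process started for this test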
18:35:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:01.415 18:35:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:16:01.415 18:35:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:16:01.415 18:35:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 86304 ']' 00:16:01.415 18:35:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 86304 00:16:01.415 18:35:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 86304 ']' 00:16:01.415 18:35:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 86304 00:16:01.415 18:35:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:16:01.415 18:35:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:01.415 18:35:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86304 00:16:01.415 18:35:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:01.415 18:35:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:01.415 killing process with pid 86304 00:16:01.415 18:35:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86304' 00:16:01.415 18:35:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 86304 00:16:01.415 18:35:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 86304 00:16:01.673 18:35:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:01.673 18:35:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:01.673 18:35:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:01.673 18:35:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:01.674 18:35:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:01.674 18:35:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:01.674 18:35:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:01.674 18:35:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:01.674 18:35:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:01.674 00:16:01.674 real 0m2.625s 00:16:01.674 user 0m6.829s 00:16:01.674 sys 0m0.777s 00:16:01.674 18:35:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:01.674 18:35:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:01.674 ************************************ 00:16:01.674 END TEST nvmf_identify 00:16:01.674 ************************************ 00:16:01.674 18:35:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:01.674 18:35:24 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:16:01.674 18:35:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:01.674 18:35:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:01.674 18:35:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:01.674 ************************************ 00:16:01.674 START TEST nvmf_perf 00:16:01.674 ************************************ 00:16:01.674 18:35:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh 
--transport=tcp 00:16:01.932 * Looking for test storage... 00:16:01.932 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:01.932 18:35:24 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:01.932 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:16:01.932 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:01.932 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:01.932 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:01.932 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:01.932 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:01.932 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:01.932 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:01.932 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:01.932 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:01.932 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:01.932 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:16:01.932 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=ee8aff67-4252-4979-91cf-1a72f40d57b6 00:16:01.932 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:01.932 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:01.932 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:01.932 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:01.932 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:01.932 18:35:24 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:01.932 18:35:24 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:01.932 18:35:24 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:01.932 18:35:24 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.932 18:35:24 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.932 18:35:24 
nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.932 18:35:24 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:16:01.932 18:35:24 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.932 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:16:01.932 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:01.932 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:01.932 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:01.932 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:01.932 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:01.932 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:01.932 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:01.932 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:01.932 18:35:24 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:01.932 18:35:24 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:01.932 18:35:24 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:01.932 18:35:24 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:16:01.932 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:01.932 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:01.932 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:01.932 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:01.932 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:01.932 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:01.932 18:35:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:01.932 18:35:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:01.932 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:01.932 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:01.932 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:01.932 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:01.932 
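(The nvmftestinit / nvmf_veth_init sequence that follows builds a small veth-plus-bridge topology between the initiator and a dedicated target network namespace. Stripped of the xtrace prefixes, the commands it issues, all of which are visible below, amount to roughly the following sketch; names and addresses are taken from the log.)

  # Target lives in its own netns; three veth pairs hang off a host-side bridge.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first target address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second target address
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up; ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP to the listener port
  ping -c 1 10.0.0.2                                                       # sanity-check initiator -> target reachability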
18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:01.932 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:01.932 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:01.932 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:01.932 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:01.932 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:01.932 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:01.932 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:01.932 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:01.932 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:01.932 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:01.932 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:01.932 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:01.932 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:01.932 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:01.932 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:01.932 Cannot find device "nvmf_tgt_br" 00:16:01.932 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # true 00:16:01.932 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:01.932 Cannot find device "nvmf_tgt_br2" 00:16:01.932 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # true 00:16:01.932 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:01.932 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:01.932 Cannot find device "nvmf_tgt_br" 00:16:01.932 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # true 00:16:01.932 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:01.932 Cannot find device "nvmf_tgt_br2" 00:16:01.932 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # true 00:16:01.932 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:02.190 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:02.190 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:02.190 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:02.190 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # true 00:16:02.190 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:02.190 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:02.190 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # true 00:16:02.190 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:02.190 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:02.190 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if 
type veth peer name nvmf_tgt_br 00:16:02.190 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:02.190 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:02.190 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:02.190 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:02.190 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:02.191 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:02.191 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:02.191 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:02.191 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:02.191 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:02.191 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:02.191 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:02.191 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:02.191 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:02.191 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:02.191 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:02.191 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:02.449 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:02.449 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:02.449 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:02.449 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:02.449 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:02.449 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:16:02.449 00:16:02.449 --- 10.0.0.2 ping statistics --- 00:16:02.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:02.449 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:16:02.449 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:02.449 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:02.449 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.036 ms 00:16:02.449 00:16:02.449 --- 10.0.0.3 ping statistics --- 00:16:02.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:02.449 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:16:02.449 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:02.449 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:02.449 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:16:02.449 00:16:02.449 --- 10.0.0.1 ping statistics --- 00:16:02.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:02.449 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:16:02.449 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:02.449 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@433 -- # return 0 00:16:02.449 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:02.449 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:02.449 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:02.449 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:02.449 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:02.449 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:02.449 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:02.449 18:35:24 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:16:02.449 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:02.449 18:35:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:02.449 18:35:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:16:02.449 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=86523 00:16:02.449 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:02.449 18:35:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 86523 00:16:02.449 18:35:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 86523 ']' 00:16:02.449 18:35:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:02.449 18:35:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:02.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:02.449 18:35:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:02.449 18:35:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:02.449 18:35:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:16:02.449 [2024-07-15 18:35:24.946182] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:16:02.449 [2024-07-15 18:35:24.946250] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:02.707 [2024-07-15 18:35:25.088713] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:02.707 [2024-07-15 18:35:25.180637] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:02.707 [2024-07-15 18:35:25.180689] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:02.707 [2024-07-15 18:35:25.180698] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:02.707 [2024-07-15 18:35:25.180706] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:02.707 [2024-07-15 18:35:25.180713] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:02.707 [2024-07-15 18:35:25.181684] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:02.707 [2024-07-15 18:35:25.181860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:02.707 [2024-07-15 18:35:25.181887] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:02.707 [2024-07-15 18:35:25.181893] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:03.271 18:35:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:03.271 18:35:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:16:03.271 18:35:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:03.271 18:35:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:03.271 18:35:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:16:03.271 18:35:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:03.271 18:35:25 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:16:03.271 18:35:25 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:16:03.834 18:35:26 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:16:03.834 18:35:26 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:16:03.834 18:35:26 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:16:03.834 18:35:26 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:04.104 18:35:26 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:16:04.104 18:35:26 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:16:04.104 18:35:26 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:16:04.104 18:35:26 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:16:04.104 18:35:26 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:04.365 [2024-07-15 18:35:26.815716] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:04.365 18:35:26 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:04.622 18:35:27 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:16:04.622 18:35:27 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:04.880 18:35:27 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:16:04.880 18:35:27 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:16:04.880 18:35:27 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:05.137 [2024-07-15 18:35:27.615586] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:05.137 18:35:27 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:05.394 18:35:27 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:16:05.394 18:35:27 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:16:05.394 18:35:27 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:16:05.394 18:35:27 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:16:06.818 Initializing NVMe Controllers 00:16:06.818 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:16:06.818 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:16:06.818 Initialization complete. Launching workers. 00:16:06.818 ======================================================== 00:16:06.818 Latency(us) 00:16:06.818 Device Information : IOPS MiB/s Average min max 00:16:06.818 PCIE (0000:00:10.0) NSID 1 from core 0: 19290.31 75.35 1658.38 226.83 7661.22 00:16:06.818 ======================================================== 00:16:06.818 Total : 19290.31 75.35 1658.38 226.83 7661.22 00:16:06.818 00:16:06.818 18:35:28 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:16:07.749 Initializing NVMe Controllers 00:16:07.749 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:07.749 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:07.749 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:16:07.749 Initialization complete. Launching workers. 00:16:07.749 ======================================================== 00:16:07.749 Latency(us) 00:16:07.749 Device Information : IOPS MiB/s Average min max 00:16:07.749 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4952.88 19.35 201.68 80.35 7082.13 00:16:07.749 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 123.92 0.48 8133.63 5023.61 12026.03 00:16:07.749 ======================================================== 00:16:07.749 Total : 5076.81 19.83 395.29 80.35 12026.03 00:16:07.749 00:16:07.749 18:35:30 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:16:09.116 Initializing NVMe Controllers 00:16:09.116 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:09.116 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:09.116 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:16:09.116 Initialization complete. Launching workers. 
00:16:09.116 ======================================================== 00:16:09.116 Latency(us) 00:16:09.116 Device Information : IOPS MiB/s Average min max 00:16:09.116 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11416.75 44.60 2803.04 583.13 6299.28 00:16:09.116 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2673.03 10.44 12100.33 5734.63 20237.78 00:16:09.116 ======================================================== 00:16:09.116 Total : 14089.78 55.04 4566.87 583.13 20237.78 00:16:09.116 00:16:09.116 18:35:31 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:16:09.116 18:35:31 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:16:11.637 Initializing NVMe Controllers 00:16:11.637 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:11.637 Controller IO queue size 128, less than required. 00:16:11.637 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:11.637 Controller IO queue size 128, less than required. 00:16:11.637 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:11.637 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:11.637 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:16:11.637 Initialization complete. Launching workers. 00:16:11.637 ======================================================== 00:16:11.637 Latency(us) 00:16:11.637 Device Information : IOPS MiB/s Average min max 00:16:11.637 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2232.69 558.17 58315.94 38165.63 98523.14 00:16:11.637 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 589.92 147.48 227273.07 117522.62 348213.45 00:16:11.637 ======================================================== 00:16:11.637 Total : 2822.61 705.65 93627.56 38165.63 348213.45 00:16:11.637 00:16:11.894 18:35:34 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:16:11.894 Initializing NVMe Controllers 00:16:11.894 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:11.894 Controller IO queue size 128, less than required. 00:16:11.894 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:11.894 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:16:11.894 Controller IO queue size 128, less than required. 00:16:11.894 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:11.894 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. 
Removing this ns from test 00:16:11.894 WARNING: Some requested NVMe devices were skipped 00:16:11.894 No valid NVMe controllers or AIO or URING devices found 00:16:11.894 18:35:34 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:16:14.412 Initializing NVMe Controllers 00:16:14.412 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:14.412 Controller IO queue size 128, less than required. 00:16:14.412 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:14.412 Controller IO queue size 128, less than required. 00:16:14.412 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:14.412 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:14.412 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:16:14.412 Initialization complete. Launching workers. 00:16:14.412 00:16:14.412 ==================== 00:16:14.412 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:16:14.412 TCP transport: 00:16:14.412 polls: 10727 00:16:14.412 idle_polls: 7302 00:16:14.412 sock_completions: 3425 00:16:14.412 nvme_completions: 7135 00:16:14.412 submitted_requests: 10802 00:16:14.412 queued_requests: 1 00:16:14.412 00:16:14.412 ==================== 00:16:14.412 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:16:14.412 TCP transport: 00:16:14.412 polls: 10898 00:16:14.412 idle_polls: 7412 00:16:14.412 sock_completions: 3486 00:16:14.412 nvme_completions: 7203 00:16:14.412 submitted_requests: 10822 00:16:14.412 queued_requests: 1 00:16:14.412 ======================================================== 00:16:14.412 Latency(us) 00:16:14.412 Device Information : IOPS MiB/s Average min max 00:16:14.412 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1782.36 445.59 73075.81 40966.69 111025.36 00:16:14.412 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1799.35 449.84 71460.75 37363.92 107404.92 00:16:14.412 ======================================================== 00:16:14.412 Total : 3581.71 895.43 72264.45 37363.92 111025.36 00:16:14.412 00:16:14.412 18:35:37 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:16:14.669 18:35:37 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:14.669 18:35:37 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:16:14.669 18:35:37 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:16:14.669 18:35:37 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:16:14.669 18:35:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:14.669 18:35:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:16:14.669 18:35:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:14.669 18:35:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:16:14.669 18:35:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:14.669 18:35:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:14.669 rmmod nvme_tcp 00:16:14.926 rmmod nvme_fabrics 00:16:14.926 rmmod nvme_keyring 00:16:14.926 18:35:37 
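(Each of the perf passes above runs the same binary with different queue-depth, I/O-size and runtime knobs; the two invocations below are copied from command lines in this log, only reflowed for readability. The -r string selects the NVMe/TCP listener created earlier; --transport-stat is what produced the polls/idle_polls/sock_completions summary above.)

  # 4 KiB random 50/50 read/write at queue depth 32 for 1 second against the TCP subsystem.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
      -q 32 -o 4096 -w randrw -M 50 -t 1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'

  # Larger I/O at queue depth 128 for 2 seconds, printing transport statistics at the end.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
      -q 128 -o 262144 -w randrw -M 50 -t 2 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat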
nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:14.926 18:35:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:16:14.926 18:35:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:16:14.926 18:35:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 86523 ']' 00:16:14.926 18:35:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 86523 00:16:14.926 18:35:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 86523 ']' 00:16:14.926 18:35:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 86523 00:16:14.926 18:35:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:16:14.926 18:35:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:14.926 18:35:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86523 00:16:14.926 18:35:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:14.926 18:35:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:14.926 killing process with pid 86523 00:16:14.926 18:35:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86523' 00:16:14.926 18:35:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 86523 00:16:14.926 18:35:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 86523 00:16:15.490 18:35:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:15.490 18:35:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:15.490 18:35:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:15.490 18:35:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:15.490 18:35:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:15.490 18:35:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:15.490 18:35:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:15.490 18:35:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:15.490 18:35:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:15.490 00:16:15.490 real 0m13.803s 00:16:15.490 user 0m49.218s 00:16:15.490 sys 0m3.930s 00:16:15.490 18:35:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:15.490 18:35:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:16:15.490 ************************************ 00:16:15.490 END TEST nvmf_perf 00:16:15.490 ************************************ 00:16:15.748 18:35:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:15.748 18:35:38 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:16:15.748 18:35:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:15.748 18:35:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:15.748 18:35:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:15.748 ************************************ 00:16:15.748 START TEST nvmf_fio_host 00:16:15.748 ************************************ 00:16:15.748 18:35:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:16:15.748 * Looking for test storage... 
00:16:15.748 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:15.748 18:35:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:15.748 18:35:38 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:15.748 18:35:38 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:15.748 18:35:38 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:15.748 18:35:38 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.748 18:35:38 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.748 18:35:38 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.748 18:35:38 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:16:15.748 18:35:38 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.748 18:35:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:15.748 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:16:15.748 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:15.748 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:15.748 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:15.748 18:35:38 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:15.748 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:15.748 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:15.748 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:15.748 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:15.748 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:15.748 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:15.748 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:16:15.748 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=ee8aff67-4252-4979-91cf-1a72f40d57b6 00:16:15.748 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:15.748 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:15.748 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:15.748 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:15.748 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:15.748 18:35:38 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:15.748 18:35:38 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:15.748 18:35:38 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:15.748 18:35:38 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.749 18:35:38 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.749 18:35:38 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.749 18:35:38 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:16:15.749 18:35:38 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.749 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:16:15.749 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:15.749 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:15.749 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:15.749 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:15.749 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:15.749 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:15.749 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:15.749 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:15.749 18:35:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:15.749 18:35:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:16:15.749 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:15.749 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:15.749 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:15.749 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:15.749 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:15.749 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:15.749 18:35:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:15.749 18:35:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:15.749 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:15.749 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:15.749 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:15.749 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 
00:16:15.749 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:15.749 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:15.749 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:15.749 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:15.749 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:15.749 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:15.749 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:15.749 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:15.749 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:15.749 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:15.749 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:15.749 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:15.749 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:15.749 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:15.749 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:15.749 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:16.042 Cannot find device "nvmf_tgt_br" 00:16:16.042 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # true 00:16:16.042 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:16.042 Cannot find device "nvmf_tgt_br2" 00:16:16.042 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # true 00:16:16.042 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:16.042 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:16.042 Cannot find device "nvmf_tgt_br" 00:16:16.042 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # true 00:16:16.042 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:16.042 Cannot find device "nvmf_tgt_br2" 00:16:16.042 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # true 00:16:16.042 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:16.042 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:16.042 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:16.042 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:16.042 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:16:16.042 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:16.042 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:16.042 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:16:16.042 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:16.042 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link 
add nvmf_init_if type veth peer name nvmf_init_br 00:16:16.042 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:16.042 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:16.042 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:16.042 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:16.042 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:16.042 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:16.042 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:16.042 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:16.042 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:16.042 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:16.042 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:16.042 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:16.042 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:16.331 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:16.331 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:16.331 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:16.331 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:16.331 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:16.331 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:16.331 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:16.331 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:16.331 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:16.331 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:16.331 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:16:16.331 00:16:16.331 --- 10.0.0.2 ping statistics --- 00:16:16.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:16.331 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:16:16.331 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:16.331 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:16.331 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:16:16.331 00:16:16.331 --- 10.0.0.3 ping statistics --- 00:16:16.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:16.331 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:16:16.331 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:16.331 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:16.331 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:16:16.331 00:16:16.331 --- 10.0.0.1 ping statistics --- 00:16:16.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:16.331 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:16:16.331 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:16.331 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@433 -- # return 0 00:16:16.331 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:16.331 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:16.331 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:16.331 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:16.331 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:16.331 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:16.331 18:35:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:16.331 18:35:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:16:16.331 18:35:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:16:16.331 18:35:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:16.331 18:35:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:16:16.331 18:35:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=86993 00:16:16.331 18:35:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:16.331 18:35:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:16.331 18:35:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 86993 00:16:16.331 18:35:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 86993 ']' 00:16:16.331 18:35:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:16.331 18:35:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:16.331 18:35:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:16.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:16.331 18:35:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:16.331 18:35:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:16:16.331 [2024-07-15 18:35:38.821555] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:16:16.331 [2024-07-15 18:35:38.821639] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:16.591 [2024-07-15 18:35:38.963877] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:16.591 [2024-07-15 18:35:39.063649] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:16:16.591 [2024-07-15 18:35:39.063698] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:16.591 [2024-07-15 18:35:39.063707] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:16.591 [2024-07-15 18:35:39.063715] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:16.591 [2024-07-15 18:35:39.063722] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:16.591 [2024-07-15 18:35:39.063874] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:16.591 [2024-07-15 18:35:39.064024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:16.591 [2024-07-15 18:35:39.064864] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:16.591 [2024-07-15 18:35:39.064866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:17.160 18:35:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:17.160 18:35:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:16:17.160 18:35:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:17.418 [2024-07-15 18:35:39.906513] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:17.418 18:35:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:16:17.418 18:35:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:17.418 18:35:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:16:17.418 18:35:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:17.676 Malloc1 00:16:17.676 18:35:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:17.935 18:35:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:18.193 18:35:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:18.193 [2024-07-15 18:35:40.787383] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:18.452 18:35:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:18.452 18:35:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:16:18.452 18:35:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:16:18.452 18:35:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:16:18.452 18:35:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:16:18.452 18:35:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 
-- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:18.452 18:35:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:16:18.452 18:35:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:18.452 18:35:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:16:18.452 18:35:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:16:18.452 18:35:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:16:18.452 18:35:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:18.452 18:35:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:16:18.452 18:35:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:16:18.452 18:35:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:16:18.452 18:35:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:16:18.452 18:35:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:16:18.452 18:35:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:18.452 18:35:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:16:18.452 18:35:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:16:18.452 18:35:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:16:18.452 18:35:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:16:18.452 18:35:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:16:18.452 18:35:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:16:18.711 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:16:18.711 fio-3.35 00:16:18.711 Starting 1 thread 00:16:21.259 00:16:21.259 test: (groupid=0, jobs=1): err= 0: pid=87123: Mon Jul 15 18:35:43 2024 00:16:21.259 read: IOPS=12.1k, BW=47.1MiB/s (49.4MB/s)(94.4MiB/2005msec) 00:16:21.259 slat (nsec): min=1566, max=444796, avg=1748.78, stdev=3534.00 00:16:21.259 clat (usec): min=3991, max=12678, avg=5562.54, stdev=436.82 00:16:21.259 lat (usec): min=3993, max=12684, avg=5564.29, stdev=437.14 00:16:21.259 clat percentiles (usec): 00:16:21.259 | 1.00th=[ 4817], 5.00th=[ 5014], 10.00th=[ 5145], 20.00th=[ 5276], 00:16:21.259 | 30.00th=[ 5407], 40.00th=[ 5473], 50.00th=[ 5538], 60.00th=[ 5604], 00:16:21.259 | 70.00th=[ 5669], 80.00th=[ 5800], 90.00th=[ 5932], 95.00th=[ 6063], 00:16:21.259 | 99.00th=[ 6587], 99.50th=[ 8225], 99.90th=[10028], 99.95th=[11338], 00:16:21.259 | 99.99th=[12387] 00:16:21.259 bw ( KiB/s): min=46952, max=48888, per=99.52%, avg=47992.00, stdev=976.00, samples=3 00:16:21.259 iops : min=11738, max=12222, avg=11998.00, stdev=244.00, samples=3 00:16:21.259 write: IOPS=12.0k, BW=46.9MiB/s (49.2MB/s)(94.0MiB/2005msec); 0 zone resets 00:16:21.259 slat (nsec): min=1640, max=301464, avg=1818.36, stdev=2185.49 00:16:21.259 clat (usec): min=3027, max=9656, avg=5044.20, stdev=336.36 
00:16:21.259 lat (usec): min=3043, max=9657, avg=5046.02, stdev=336.64 00:16:21.259 clat percentiles (usec): 00:16:21.259 | 1.00th=[ 4359], 5.00th=[ 4555], 10.00th=[ 4686], 20.00th=[ 4817], 00:16:21.259 | 30.00th=[ 4883], 40.00th=[ 4948], 50.00th=[ 5014], 60.00th=[ 5080], 00:16:21.259 | 70.00th=[ 5145], 80.00th=[ 5276], 90.00th=[ 5407], 95.00th=[ 5473], 00:16:21.259 | 99.00th=[ 5800], 99.50th=[ 6194], 99.90th=[ 8455], 99.95th=[ 8717], 00:16:21.259 | 99.99th=[ 9241] 00:16:21.259 bw ( KiB/s): min=47619, max=48792, per=100.00%, avg=48118.33, stdev=605.62, samples=3 00:16:21.259 iops : min=11904, max=12198, avg=12029.33, stdev=151.71, samples=3 00:16:21.259 lat (msec) : 4=0.15%, 10=99.81%, 20=0.05% 00:16:21.259 cpu : usr=65.62%, sys=26.15%, ctx=3, majf=0, minf=6 00:16:21.259 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:16:21.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:21.259 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:21.259 issued rwts: total=24171,24070,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:21.259 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:21.259 00:16:21.259 Run status group 0 (all jobs): 00:16:21.259 READ: bw=47.1MiB/s (49.4MB/s), 47.1MiB/s-47.1MiB/s (49.4MB/s-49.4MB/s), io=94.4MiB (99.0MB), run=2005-2005msec 00:16:21.259 WRITE: bw=46.9MiB/s (49.2MB/s), 46.9MiB/s-46.9MiB/s (49.2MB/s-49.2MB/s), io=94.0MiB (98.6MB), run=2005-2005msec 00:16:21.259 18:35:43 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:16:21.259 18:35:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:16:21.259 18:35:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:16:21.259 18:35:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:21.259 18:35:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:16:21.259 18:35:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:21.259 18:35:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:16:21.259 18:35:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:16:21.259 18:35:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:16:21.259 18:35:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:21.259 18:35:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:16:21.260 18:35:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:16:21.260 18:35:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:16:21.260 18:35:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:16:21.260 18:35:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:16:21.260 18:35:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
00:16:21.260 18:35:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:16:21.260 18:35:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:16:21.260 18:35:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:16:21.260 18:35:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:16:21.260 18:35:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:16:21.260 18:35:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:16:21.260 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:16:21.260 fio-3.35 00:16:21.260 Starting 1 thread 00:16:23.854 00:16:23.854 test: (groupid=0, jobs=1): err= 0: pid=87170: Mon Jul 15 18:35:45 2024 00:16:23.854 read: IOPS=10.9k, BW=170MiB/s (178MB/s)(340MiB/2006msec) 00:16:23.854 slat (nsec): min=2470, max=90048, avg=2770.51, stdev=1454.90 00:16:23.854 clat (usec): min=1889, max=15678, avg=6902.14, stdev=1673.58 00:16:23.854 lat (usec): min=1892, max=15693, avg=6904.91, stdev=1673.80 00:16:23.854 clat percentiles (usec): 00:16:23.854 | 1.00th=[ 3720], 5.00th=[ 4359], 10.00th=[ 4752], 20.00th=[ 5342], 00:16:23.854 | 30.00th=[ 5866], 40.00th=[ 6390], 50.00th=[ 6849], 60.00th=[ 7373], 00:16:23.854 | 70.00th=[ 7963], 80.00th=[ 8356], 90.00th=[ 8717], 95.00th=[ 9503], 00:16:23.854 | 99.00th=[11207], 99.50th=[11731], 99.90th=[14746], 99.95th=[15139], 00:16:23.854 | 99.99th=[15664] 00:16:23.854 bw ( KiB/s): min=82432, max=96832, per=50.68%, avg=88008.00, stdev=6432.26, samples=4 00:16:23.854 iops : min= 5152, max= 6052, avg=5500.50, stdev=402.02, samples=4 00:16:23.854 write: IOPS=6542, BW=102MiB/s (107MB/s)(180MiB/1757msec); 0 zone resets 00:16:23.854 slat (usec): min=28, max=442, avg=30.38, stdev= 8.18 00:16:23.854 clat (usec): min=3998, max=17352, avg=8530.45, stdev=1553.64 00:16:23.854 lat (usec): min=4027, max=17469, avg=8560.83, stdev=1556.05 00:16:23.854 clat percentiles (usec): 00:16:23.854 | 1.00th=[ 5800], 5.00th=[ 6456], 10.00th=[ 6783], 20.00th=[ 7242], 00:16:23.854 | 30.00th=[ 7570], 40.00th=[ 7963], 50.00th=[ 8291], 60.00th=[ 8717], 00:16:23.854 | 70.00th=[ 9241], 80.00th=[ 9765], 90.00th=[10552], 95.00th=[11207], 00:16:23.854 | 99.00th=[13304], 99.50th=[13960], 99.90th=[16712], 99.95th=[17171], 00:16:23.855 | 99.99th=[17433] 00:16:23.855 bw ( KiB/s): min=85952, max=100800, per=87.45%, avg=91544.00, stdev=6724.13, samples=4 00:16:23.855 iops : min= 5372, max= 6300, avg=5721.50, stdev=420.26, samples=4 00:16:23.855 lat (msec) : 2=0.01%, 4=1.45%, 10=90.99%, 20=7.55% 00:16:23.855 cpu : usr=72.57%, sys=18.15%, ctx=5, majf=0, minf=18 00:16:23.855 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:16:23.855 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:23.855 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:23.855 issued rwts: total=21771,11495,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:23.855 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:23.855 00:16:23.855 Run status group 0 (all jobs): 00:16:23.855 READ: bw=170MiB/s (178MB/s), 170MiB/s-170MiB/s (178MB/s-178MB/s), io=340MiB (357MB), run=2006-2006msec 00:16:23.855 WRITE: bw=102MiB/s 
(107MB/s), 102MiB/s-102MiB/s (107MB/s-107MB/s), io=180MiB (188MB), run=1757-1757msec 00:16:23.855 18:35:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:23.855 18:35:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:16:23.855 18:35:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:16:23.855 18:35:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:16:23.855 18:35:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:16:23.855 18:35:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:23.855 18:35:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:16:23.855 18:35:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:23.855 18:35:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:16:23.855 18:35:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:23.855 18:35:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:23.855 rmmod nvme_tcp 00:16:23.855 rmmod nvme_fabrics 00:16:23.855 rmmod nvme_keyring 00:16:23.855 18:35:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:23.855 18:35:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:16:23.855 18:35:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:16:23.855 18:35:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 86993 ']' 00:16:23.855 18:35:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 86993 00:16:23.855 18:35:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 86993 ']' 00:16:23.855 18:35:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 86993 00:16:23.855 18:35:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:16:23.855 18:35:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:23.855 18:35:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86993 00:16:23.855 killing process with pid 86993 00:16:23.855 18:35:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:23.855 18:35:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:23.855 18:35:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86993' 00:16:23.855 18:35:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 86993 00:16:23.855 18:35:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 86993 00:16:24.114 18:35:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:24.114 18:35:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:24.114 18:35:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:24.114 18:35:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:24.114 18:35:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:24.114 18:35:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:24.114 18:35:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:24.114 18:35:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:24.114 18:35:46 
nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:24.114 ************************************ 00:16:24.114 END TEST nvmf_fio_host 00:16:24.114 ************************************ 00:16:24.114 00:16:24.114 real 0m8.469s 00:16:24.114 user 0m33.634s 00:16:24.114 sys 0m2.544s 00:16:24.114 18:35:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:24.114 18:35:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:16:24.114 18:35:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:24.114 18:35:46 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:16:24.114 18:35:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:24.114 18:35:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:24.114 18:35:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:24.114 ************************************ 00:16:24.114 START TEST nvmf_failover 00:16:24.114 ************************************ 00:16:24.114 18:35:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:16:24.374 * Looking for test storage... 00:16:24.374 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:24.374 18:35:46 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:24.374 18:35:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:16:24.374 18:35:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:24.374 18:35:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:24.374 18:35:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:24.374 18:35:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:24.374 18:35:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:24.374 18:35:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:24.374 18:35:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:24.374 18:35:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:24.374 18:35:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:24.374 18:35:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:24.374 18:35:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:16:24.374 18:35:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=ee8aff67-4252-4979-91cf-1a72f40d57b6 00:16:24.374 18:35:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:24.374 18:35:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:24.374 18:35:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:24.374 18:35:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:24.375 18:35:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:24.375 18:35:46 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:24.375 18:35:46 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:24.375 18:35:46 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:24.375 18:35:46 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.375 18:35:46 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.375 18:35:46 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.375 18:35:46 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:16:24.375 18:35:46 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.375 18:35:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:16:24.375 18:35:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:24.375 18:35:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:24.375 18:35:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:24.375 18:35:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:24.375 18:35:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:24.375 18:35:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:24.375 18:35:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:24.375 18:35:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:24.375 18:35:46 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 
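The nvmf_failover test starting here repeats the same nvmftestinit/veth bring-up as the fio_host test above and then provisions the target with a single subsystem exposed on three TCP listeners, giving the host alternate paths to fail over to. Condensed from the rpc.py calls traced further down (rpc.py stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py as in the log; arguments are copied from the trace):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422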
00:16:24.375 18:35:46 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:24.375 18:35:46 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:24.375 18:35:46 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:24.375 18:35:46 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:16:24.375 18:35:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:24.375 18:35:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:24.375 18:35:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:24.375 18:35:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:24.375 18:35:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:24.375 18:35:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:24.375 18:35:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:24.375 18:35:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:24.375 18:35:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:24.375 18:35:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:24.375 18:35:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:24.375 18:35:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:24.375 18:35:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:24.375 18:35:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:24.375 18:35:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:24.375 18:35:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:24.375 18:35:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:24.375 18:35:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:24.375 18:35:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:24.375 18:35:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:24.375 18:35:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:24.375 18:35:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:24.375 18:35:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:24.375 18:35:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:24.375 18:35:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:24.375 18:35:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:24.375 18:35:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:24.375 18:35:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:24.375 Cannot find device "nvmf_tgt_br" 00:16:24.375 18:35:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # true 00:16:24.375 18:35:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:24.375 Cannot find device "nvmf_tgt_br2" 00:16:24.375 18:35:46 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # true 00:16:24.375 18:35:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:24.375 18:35:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:24.375 Cannot find device "nvmf_tgt_br" 00:16:24.375 18:35:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # true 00:16:24.375 18:35:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:24.375 Cannot find device "nvmf_tgt_br2" 00:16:24.375 18:35:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # true 00:16:24.375 18:35:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:24.634 18:35:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:24.634 18:35:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:24.634 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:24.634 18:35:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # true 00:16:24.634 18:35:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:24.634 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:24.634 18:35:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # true 00:16:24.634 18:35:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:24.634 18:35:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:24.634 18:35:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:24.634 18:35:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:24.634 18:35:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:24.634 18:35:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:24.634 18:35:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:24.634 18:35:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:24.634 18:35:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:24.634 18:35:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:24.634 18:35:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:24.634 18:35:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:24.634 18:35:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:24.634 18:35:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:24.634 18:35:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:24.634 18:35:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:24.634 18:35:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:24.634 18:35:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:24.634 18:35:47 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:24.634 18:35:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:24.634 18:35:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:24.634 18:35:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:24.634 18:35:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:24.634 18:35:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:24.634 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:24.634 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:16:24.634 00:16:24.634 --- 10.0.0.2 ping statistics --- 00:16:24.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:24.634 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:16:24.634 18:35:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:24.634 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:24.634 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:16:24.634 00:16:24.634 --- 10.0.0.3 ping statistics --- 00:16:24.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:24.634 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:16:24.634 18:35:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:24.634 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:24.634 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:16:24.634 00:16:24.634 --- 10.0.0.1 ping statistics --- 00:16:24.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:24.634 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:16:24.634 18:35:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:24.634 18:35:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@433 -- # return 0 00:16:24.634 18:35:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:24.634 18:35:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:24.634 18:35:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:24.634 18:35:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:24.634 18:35:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:24.634 18:35:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:24.634 18:35:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:24.893 18:35:47 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:16:24.893 18:35:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:24.893 18:35:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:24.893 18:35:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:24.893 18:35:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=87390 00:16:24.893 18:35:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:24.893 18:35:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 87390 00:16:24.893 18:35:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 87390 ']' 
00:16:24.893 18:35:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:24.893 18:35:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:24.893 18:35:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:24.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:24.893 18:35:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:24.893 18:35:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:24.893 [2024-07-15 18:35:47.329642] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:16:24.893 [2024-07-15 18:35:47.329720] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:24.893 [2024-07-15 18:35:47.465612] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:25.153 [2024-07-15 18:35:47.553590] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:25.153 [2024-07-15 18:35:47.553853] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:25.153 [2024-07-15 18:35:47.554055] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:25.153 [2024-07-15 18:35:47.554100] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:25.153 [2024-07-15 18:35:47.554125] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
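Once the target application is up and provisioned, the host side of failover.sh drives I/O through bdevperf over the first path and then removes that listener to exercise a path failover. Condensed from the traced commands that follow (paths, ports, and flags exactly as they appear in the log; rpc.py again stands for scripts/rpc.py):

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # traced after run_test_pid=87538 and a short sleep

The long run of "recv state of tqpair ... is same with the state" notices near the end of this section appears in the trace immediately after that nvmf_subsystem_remove_listener call, i.e. while the 4420 listener is being torn down during the failover.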
00:16:25.153 [2024-07-15 18:35:47.554402] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:25.153 [2024-07-15 18:35:47.554541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:25.153 [2024-07-15 18:35:47.554632] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:25.722 18:35:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:25.722 18:35:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:16:25.722 18:35:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:25.722 18:35:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:25.722 18:35:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:25.722 18:35:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:25.722 18:35:48 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:25.982 [2024-07-15 18:35:48.410724] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:25.982 18:35:48 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:16:26.241 Malloc0 00:16:26.241 18:35:48 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:26.241 18:35:48 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:26.501 18:35:49 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:26.760 [2024-07-15 18:35:49.205737] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:26.760 18:35:49 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:27.019 [2024-07-15 18:35:49.389551] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:27.019 18:35:49 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:16:27.019 [2024-07-15 18:35:49.573418] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:16:27.019 18:35:49 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:16:27.019 18:35:49 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=87496 00:16:27.019 18:35:49 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:27.019 18:35:49 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 87496 /var/tmp/bdevperf.sock 00:16:27.019 18:35:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 87496 ']' 00:16:27.019 18:35:49 nvmf_tcp.nvmf_failover -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:27.019 18:35:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:27.019 18:35:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:27.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:27.019 18:35:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:27.019 18:35:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:27.956 18:35:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:27.956 18:35:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:16:27.956 18:35:50 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:28.215 NVMe0n1 00:16:28.215 18:35:50 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:28.474 00:16:28.474 18:35:51 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=87538 00:16:28.474 18:35:51 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:28.474 18:35:51 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:16:29.850 18:35:52 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:29.850 [2024-07-15 18:35:52.268850] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eff80 is same with the state(5) to be set 00:16:29.850 [2024-07-15 18:35:52.268894] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eff80 is same with the state(5) to be set 00:16:29.850 [2024-07-15 18:35:52.268903] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eff80 is same with the state(5) to be set 00:16:29.850 [2024-07-15 18:35:52.268912] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eff80 is same with the state(5) to be set 00:16:29.850 [2024-07-15 18:35:52.268920] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eff80 is same with the state(5) to be set 00:16:29.850 [2024-07-15 18:35:52.268928] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eff80 is same with the state(5) to be set 00:16:29.850 [2024-07-15 18:35:52.268937] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eff80 is same with the state(5) to be set 00:16:29.850 [2024-07-15 18:35:52.268946] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eff80 is same with the state(5) to be set 00:16:29.850 [2024-07-15 18:35:52.268954] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eff80 is same with the state(5) to be set 00:16:29.850 [2024-07-15 18:35:52.268962] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eff80 is same with the state(5) to be set 
00:16:29.850 [2024-07-15 18:35:52.268970 .. 18:35:52.269885] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eff80 is same with the state(5) to be set (this identical error is logged repeatedly over the interval shown)
00:16:29.851 18:35:52 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:16:33.174 18:35:55 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:16:33.174
00:16:33.174 18:35:55 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:16:33.174 [2024-07-15 18:35:55.750591 .. 18:35:55.750919] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f0e10 is same with the state(5) to be set (this identical error is logged repeatedly over the interval shown)
00:16:33.175 18:35:55 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:16:36.501 18:35:58 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:16:36.501 [2024-07-15 18:35:58.956287] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:16:36.501 18:35:58 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:16:37.437 18:35:59 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:16:37.696 [2024-07-15 18:36:00.162580 .. 18:36:00.163262] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f19a0 is same with the state(5) to be set (this identical error is logged repeatedly over the interval shown)
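Condensed from the failover.sh lines above (steps @45 through @57), the RPC sequence being exercised here is the one sketched below. The rpc.py path, NQN, address and port numbers are taken verbatim from the log; only the $rpc and $nqn shorthands are introduced for brevity. As in the log, the attach call is sent to bdevperf's own RPC socket, while the listener calls go to the target's default socket.

  # Sketch of the RPC sequence shown above (failover.sh @45-@57).
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  sleep 3
  # attach a second controller path through the listener on port 4422 (sent to bdevperf)
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n "$nqn"
  # remove the listener on port 4421 from the subsystem
  $rpc nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4421
  sleep 3
  # re-add the listener on port 4420 (the "NVMe/TCP Target Listening" notice above)
  $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
  sleep 1
  # finally remove the temporary listener on port 4422
  $rpc nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4422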
00:16:37.697 18:36:00 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 87538
00:16:44.261 0
00:16:44.261 18:36:06 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 87496
00:16:44.261 18:36:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 87496 ']'
00:16:44.261 18:36:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 87496
00:16:44.261 18:36:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname
00:16:44.261 18:36:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:16:44.261 18:36:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87496
00:16:44.261 18:36:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:16:44.261 18:36:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
killing process with pid 87496
00:16:44.261 18:36:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87496'
00:16:44.261 18:36:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 87496
00:16:44.261 18:36:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 87496
00:16:44.261 18:36:06 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:16:44.261
[2024-07-15 18:35:49.630369] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization...
[2024-07-15 18:35:49.630517] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87496 ]
[2024-07-15 18:35:49.761150] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-07-15 18:35:49.855509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
Running I/O for 15 seconds...
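The try.txt header just shown is the start of the bdevperf output for this run (note the matching --file-prefix=spdk_pid87496 and the 15-second I/O run). As a rough sketch only, with the binary path and every flag except the RPC socket path and run time being assumptions rather than values taken from this log, a bdevperf instance wired up for this kind of failover test is started roughly along these lines:

  # Rough sketch, not the exact invocation from this run; path and most flags are assumed.
  # -z makes bdevperf wait for configuration over its RPC socket (-r) before running I/O;
  # -t 15 matches the "Running I/O for 15 seconds" line above.
  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 \
      &> /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt &
  bdevperf_pid=$!
  # The bdev_nvme_attach_controller RPCs seen earlier are then sent to that same socket
  # with rpc.py -s /var/tmp/bdevperf.sock, and the process is waited on and killed as in
  # the killprocess trace above.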
00:16:44.261 [2024-07-15 18:35:52.270203 .. 18:35:52.272439] nvme_qpair.c: 243:nvme_io_qpair_print_command / nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: queued READ commands (sqid:1, nsid:1, lba 107264 through 107824 in steps of 8, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and the WRITE commands that follow from lba 107832 onward (SGL DATA BLOCK OFFSET 0x0 len:0x1000) are each reported with the completion ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0; the per-command trace continues below in the same pattern
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.264 [2024-07-15 18:35:52.272453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:107912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.264 [2024-07-15 18:35:52.272466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.264 [2024-07-15 18:35:52.272479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:107920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.264 [2024-07-15 18:35:52.272491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.264 [2024-07-15 18:35:52.272505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:107928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.264 [2024-07-15 18:35:52.272517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.264 [2024-07-15 18:35:52.272531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:107936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.264 [2024-07-15 18:35:52.272543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.264 [2024-07-15 18:35:52.272557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:107944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.264 [2024-07-15 18:35:52.272576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.264 [2024-07-15 18:35:52.272590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:107952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.264 [2024-07-15 18:35:52.272607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.264 [2024-07-15 18:35:52.272621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:107960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.264 [2024-07-15 18:35:52.272633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.264 [2024-07-15 18:35:52.272648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:107968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.264 [2024-07-15 18:35:52.272661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.264 [2024-07-15 18:35:52.272674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:107976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.264 [2024-07-15 18:35:52.272687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.264 [2024-07-15 18:35:52.272700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:107984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.264 [2024-07-15 18:35:52.272713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.264 [2024-07-15 18:35:52.272727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:107992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.264 [2024-07-15 18:35:52.272739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.264 [2024-07-15 18:35:52.272753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:108000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.264 [2024-07-15 18:35:52.272765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.264 [2024-07-15 18:35:52.272778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:108008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.264 [2024-07-15 18:35:52.272790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.264 [2024-07-15 18:35:52.272804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:108016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.264 [2024-07-15 18:35:52.272817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.264 [2024-07-15 18:35:52.272830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:108024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.264 [2024-07-15 18:35:52.272843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.264 [2024-07-15 18:35:52.272857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:108032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.264 [2024-07-15 18:35:52.272869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.264 [2024-07-15 18:35:52.272883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:108040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.264 [2024-07-15 18:35:52.272895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.264 [2024-07-15 18:35:52.272909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:108048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.264 [2024-07-15 18:35:52.272921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.264 [2024-07-15 18:35:52.272939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:108056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.264 [2024-07-15 18:35:52.272951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.264 [2024-07-15 18:35:52.272965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:108064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.264 [2024-07-15 18:35:52.272978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.264 [2024-07-15 18:35:52.272991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:108072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.264 [2024-07-15 18:35:52.273004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.264 [2024-07-15 18:35:52.273017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:108080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.265 [2024-07-15 18:35:52.273030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.265 [2024-07-15 18:35:52.273043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:108088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.265 [2024-07-15 18:35:52.273055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.265 [2024-07-15 18:35:52.273070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:108096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.265 [2024-07-15 18:35:52.273082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.265 [2024-07-15 18:35:52.273096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:108104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.265 [2024-07-15 18:35:52.273108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.265 [2024-07-15 18:35:52.273122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:108112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.265 [2024-07-15 18:35:52.273134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.265 [2024-07-15 18:35:52.273147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:108120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.265 [2024-07-15 18:35:52.273160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.265 [2024-07-15 18:35:52.273173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:108128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.265 [2024-07-15 18:35:52.273185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.265 [2024-07-15 18:35:52.273199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:108136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.265 [2024-07-15 18:35:52.273211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.265 [2024-07-15 18:35:52.273224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:108144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.265 [2024-07-15 18:35:52.273237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:16:44.265 [2024-07-15 18:35:52.273250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:108152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.265 [2024-07-15 18:35:52.273269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.265 [2024-07-15 18:35:52.273283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:108160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.265 [2024-07-15 18:35:52.273295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.265 [2024-07-15 18:35:52.273309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:108168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.265 [2024-07-15 18:35:52.273321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.265 [2024-07-15 18:35:52.273335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:108176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.265 [2024-07-15 18:35:52.273347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.265 [2024-07-15 18:35:52.273360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:108184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.265 [2024-07-15 18:35:52.273372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.265 [2024-07-15 18:35:52.273386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:108192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.265 [2024-07-15 18:35:52.273398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.265 [2024-07-15 18:35:52.273412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:108200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.265 [2024-07-15 18:35:52.273424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.265 [2024-07-15 18:35:52.273437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:108208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.265 [2024-07-15 18:35:52.273450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.265 [2024-07-15 18:35:52.273463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:108216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.265 [2024-07-15 18:35:52.273475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.265 [2024-07-15 18:35:52.273490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:108224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.265 [2024-07-15 18:35:52.273502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.265 [2024-07-15 18:35:52.273516] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:108232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.265 [2024-07-15 18:35:52.273528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.265 [2024-07-15 18:35:52.273542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:108240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.265 [2024-07-15 18:35:52.273554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.265 [2024-07-15 18:35:52.273574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:108248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.265 [2024-07-15 18:35:52.273587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.265 [2024-07-15 18:35:52.273606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:108256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.265 [2024-07-15 18:35:52.273618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.265 [2024-07-15 18:35:52.273632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:108264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.265 [2024-07-15 18:35:52.273644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.265 [2024-07-15 18:35:52.273658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:108272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.265 [2024-07-15 18:35:52.273670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.265 [2024-07-15 18:35:52.273683] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2111c90 is same with the state(5) to be set 00:16:44.265 [2024-07-15 18:35:52.273698] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:44.265 [2024-07-15 18:35:52.273707] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:44.265 [2024-07-15 18:35:52.273717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108280 len:8 PRP1 0x0 PRP2 0x0 00:16:44.265 [2024-07-15 18:35:52.273729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.265 [2024-07-15 18:35:52.273779] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2111c90 was disconnected and freed. reset controller. 
00:16:44.265 [2024-07-15 18:35:52.273794] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 
00:16:44.265 [2024-07-15 18:35:52.273841] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:16:44.265 [2024-07-15 18:35:52.273855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:16:44.265 [2024-07-15 18:35:52.273868] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:16:44.265 [2024-07-15 18:35:52.273880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:16:44.265 [2024-07-15 18:35:52.273893] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:16:44.265 [2024-07-15 18:35:52.273905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:16:44.265 [2024-07-15 18:35:52.273918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:16:44.265 [2024-07-15 18:35:52.273930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:16:44.265 [2024-07-15 18:35:52.273942] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:16:44.265 [2024-07-15 18:35:52.276668] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:16:44.265 [2024-07-15 18:35:52.276704] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095e30 (9): Bad file descriptor 
00:16:44.265 [2024-07-15 18:35:52.308328] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
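The burst above is the failover path being exercised: every in-flight READ/WRITE on qid:1 is completed as ABORTED - SQ DELETION while the TCP qpair to 10.0.0.2:4420 is torn down, the admin queue's ASYNC EVENT REQUESTs are aborted, and bdev_nvme fails the trid over to 10.0.0.2:4421 and resets the controller successfully before the next burst begins. When skimming a saved console log for this pattern, a rough helper like the following can summarize it (not part of the test run; the log file name is a placeholder and the commands assume GNU grep):

# count completions aborted by SQ deletion in a saved copy of this console output
grep -c 'ABORTED - SQ DELETION' nvmf_failover_console.log
# split the aborted I/O command prints on qid 1 by opcode (prints counts per READ/WRITE)
grep -oE '\*NOTICE\*: (READ|WRITE) sqid:1' nvmf_failover_console.log | sort | uniq -c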
00:16:44.265 [2024-07-15 18:35:55.751052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:54136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.266 [2024-07-15 18:35:55.751105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.266 [2024-07-15 18:35:55.751141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:54144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.266 [2024-07-15 18:35:55.751161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.266 [2024-07-15 18:35:55.751175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:54152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.266 [2024-07-15 18:35:55.751188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.266 [2024-07-15 18:35:55.751202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:54160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.266 [2024-07-15 18:35:55.751214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.266 [2024-07-15 18:35:55.751228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:54168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.266 [2024-07-15 18:35:55.751240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.266 [2024-07-15 18:35:55.751254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:54176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.266 [2024-07-15 18:35:55.751267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.266 [2024-07-15 18:35:55.751285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:54184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.266 [2024-07-15 18:35:55.751297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.266 [2024-07-15 18:35:55.751311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:54192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.266 [2024-07-15 18:35:55.751323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.266 [2024-07-15 18:35:55.751337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:54200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.266 [2024-07-15 18:35:55.751349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.266 [2024-07-15 18:35:55.751363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:54208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.266 [2024-07-15 18:35:55.751375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.266 [2024-07-15 18:35:55.751389] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:54216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.266 [2024-07-15 18:35:55.751401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.266 [2024-07-15 18:35:55.751415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:54224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.266 [2024-07-15 18:35:55.751427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.266 [2024-07-15 18:35:55.751440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:54232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.266 [2024-07-15 18:35:55.751452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.266 [2024-07-15 18:35:55.751467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:54240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.266 [2024-07-15 18:35:55.751484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.266 [2024-07-15 18:35:55.751498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:54248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.266 [2024-07-15 18:35:55.751510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.266 [2024-07-15 18:35:55.751524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:54256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.266 [2024-07-15 18:35:55.751537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.266 [2024-07-15 18:35:55.751550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:54264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.266 [2024-07-15 18:35:55.751563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.266 [2024-07-15 18:35:55.751588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:54272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.266 [2024-07-15 18:35:55.751600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.266 [2024-07-15 18:35:55.751613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:54280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.266 [2024-07-15 18:35:55.751626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.266 [2024-07-15 18:35:55.751640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:54288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.266 [2024-07-15 18:35:55.751652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.266 [2024-07-15 18:35:55.751666] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:54296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.266 [2024-07-15 18:35:55.751678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.266 [2024-07-15 18:35:55.751692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:54304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.266 [2024-07-15 18:35:55.751704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.266 [2024-07-15 18:35:55.751718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:54952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.266 [2024-07-15 18:35:55.751730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.266 [2024-07-15 18:35:55.751744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:54960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.266 [2024-07-15 18:35:55.751756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.266 [2024-07-15 18:35:55.751770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:54968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.266 [2024-07-15 18:35:55.751782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.266 [2024-07-15 18:35:55.751795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:54976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.266 [2024-07-15 18:35:55.751807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.266 [2024-07-15 18:35:55.751821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:54984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.266 [2024-07-15 18:35:55.751838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.266 [2024-07-15 18:35:55.751852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:54992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.266 [2024-07-15 18:35:55.751864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.266 [2024-07-15 18:35:55.751877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:55000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.266 [2024-07-15 18:35:55.751890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.266 [2024-07-15 18:35:55.751904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:55008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.266 [2024-07-15 18:35:55.751916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.266 [2024-07-15 18:35:55.751929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:95 nsid:1 lba:55016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.266 [2024-07-15 18:35:55.751941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.266 [2024-07-15 18:35:55.751955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:54312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.267 [2024-07-15 18:35:55.751967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.267 [2024-07-15 18:35:55.751981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:54320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.267 [2024-07-15 18:35:55.751996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.267 [2024-07-15 18:35:55.752010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:54328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.267 [2024-07-15 18:35:55.752023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.267 [2024-07-15 18:35:55.752036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:54336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.267 [2024-07-15 18:35:55.752048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.267 [2024-07-15 18:35:55.752062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:54344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.267 [2024-07-15 18:35:55.752074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.267 [2024-07-15 18:35:55.752088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:54352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.267 [2024-07-15 18:35:55.752101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.267 [2024-07-15 18:35:55.752114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:54360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.267 [2024-07-15 18:35:55.752126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.267 [2024-07-15 18:35:55.752140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:54368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.267 [2024-07-15 18:35:55.752152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.267 [2024-07-15 18:35:55.752170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:54376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.267 [2024-07-15 18:35:55.752183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.267 [2024-07-15 18:35:55.752197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:54384 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.267 [2024-07-15 18:35:55.752209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.267 [2024-07-15 18:35:55.752223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:54392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.267 [2024-07-15 18:35:55.752235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.267 [2024-07-15 18:35:55.752249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:54400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.267 [2024-07-15 18:35:55.752261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.267 [2024-07-15 18:35:55.752275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:54408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.267 [2024-07-15 18:35:55.752287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.267 [2024-07-15 18:35:55.752301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:54416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.267 [2024-07-15 18:35:55.752313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.267 [2024-07-15 18:35:55.752327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:54424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.267 [2024-07-15 18:35:55.752339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.267 [2024-07-15 18:35:55.752353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:54432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.267 [2024-07-15 18:35:55.752365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.267 [2024-07-15 18:35:55.752379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:54440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.267 [2024-07-15 18:35:55.752391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.267 [2024-07-15 18:35:55.752404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:54448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.267 [2024-07-15 18:35:55.752418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.267 [2024-07-15 18:35:55.752432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:54456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.267 [2024-07-15 18:35:55.752444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.267 [2024-07-15 18:35:55.752458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:54464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:44.267 [2024-07-15 18:35:55.752470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.267 [2024-07-15 18:35:55.752484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:54472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.267 [2024-07-15 18:35:55.752500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.267 [2024-07-15 18:35:55.752515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:54480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.267 [2024-07-15 18:35:55.752527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.267 [2024-07-15 18:35:55.752540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:54488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.267 [2024-07-15 18:35:55.752553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.267 [2024-07-15 18:35:55.752574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:54496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.267 [2024-07-15 18:35:55.752587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.267 [2024-07-15 18:35:55.752601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:54504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.267 [2024-07-15 18:35:55.752614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.267 [2024-07-15 18:35:55.752627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:54512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.267 [2024-07-15 18:35:55.752639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.267 [2024-07-15 18:35:55.752653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:54520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.267 [2024-07-15 18:35:55.752665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.267 [2024-07-15 18:35:55.752679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:54528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.267 [2024-07-15 18:35:55.752691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.268 [2024-07-15 18:35:55.752705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:54536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.268 [2024-07-15 18:35:55.752717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.268 [2024-07-15 18:35:55.752731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:54544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.268 [2024-07-15 18:35:55.752743] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.268 [2024-07-15 18:35:55.752756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:54552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.268 [2024-07-15 18:35:55.752769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.268 [2024-07-15 18:35:55.752782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:54560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.268 [2024-07-15 18:35:55.752794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.268 [2024-07-15 18:35:55.752808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:54568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.268 [2024-07-15 18:35:55.752820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.268 [2024-07-15 18:35:55.752838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:54576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.268 [2024-07-15 18:35:55.752852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.268 [2024-07-15 18:35:55.752865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:54584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.268 [2024-07-15 18:35:55.752878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.268 [2024-07-15 18:35:55.752891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:54592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.268 [2024-07-15 18:35:55.752904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.268 [2024-07-15 18:35:55.752917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:54600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.268 [2024-07-15 18:35:55.752933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.268 [2024-07-15 18:35:55.752947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:54608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.268 [2024-07-15 18:35:55.752959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.268 [2024-07-15 18:35:55.752973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:54616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.268 [2024-07-15 18:35:55.752985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.268 [2024-07-15 18:35:55.752999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:54624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.268 [2024-07-15 18:35:55.753012] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.268 [2024-07-15 18:35:55.753026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:54632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.268 [2024-07-15 18:35:55.753038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.268 [2024-07-15 18:35:55.753051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:54640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.268 [2024-07-15 18:35:55.753063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.268 [2024-07-15 18:35:55.753077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:54648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.268 [2024-07-15 18:35:55.753090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.268 [2024-07-15 18:35:55.753104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:54656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.268 [2024-07-15 18:35:55.753116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.268 [2024-07-15 18:35:55.753129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:54664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.268 [2024-07-15 18:35:55.753141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.268 [2024-07-15 18:35:55.753155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:54672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.268 [2024-07-15 18:35:55.753167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.268 [2024-07-15 18:35:55.753184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:54680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.268 [2024-07-15 18:35:55.753197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.268 [2024-07-15 18:35:55.753210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:54688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.268 [2024-07-15 18:35:55.753223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.268 [2024-07-15 18:35:55.753237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:54696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.268 [2024-07-15 18:35:55.753249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.268 [2024-07-15 18:35:55.753262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:54704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.268 [2024-07-15 18:35:55.753276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.268 [2024-07-15 18:35:55.753290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:54712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.268 [2024-07-15 18:35:55.753302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.268 [2024-07-15 18:35:55.753315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:54720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.268 [2024-07-15 18:35:55.753327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.268 [2024-07-15 18:35:55.753341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:54728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.268 [2024-07-15 18:35:55.753356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.268 [2024-07-15 18:35:55.753369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:54736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.268 [2024-07-15 18:35:55.753381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.268 [2024-07-15 18:35:55.753395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:54744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.268 [2024-07-15 18:35:55.753407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.268 [2024-07-15 18:35:55.753421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:54752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.268 [2024-07-15 18:35:55.753433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.268 [2024-07-15 18:35:55.753447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:54760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.268 [2024-07-15 18:35:55.753458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.268 [2024-07-15 18:35:55.753472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:54768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.268 [2024-07-15 18:35:55.753485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.268 [2024-07-15 18:35:55.753498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:54776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.268 [2024-07-15 18:35:55.753514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.268 [2024-07-15 18:35:55.753528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:54784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.268 [2024-07-15 18:35:55.753540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.268 [2024-07-15 18:35:55.753554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:54792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.269 [2024-07-15 18:35:55.753572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.269 [2024-07-15 18:35:55.753586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:54800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.269 [2024-07-15 18:35:55.753599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.269 [2024-07-15 18:35:55.753613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:54808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.269 [2024-07-15 18:35:55.753625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.269 [2024-07-15 18:35:55.753639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:54816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.269 [2024-07-15 18:35:55.753651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.269 [2024-07-15 18:35:55.753665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:54824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.269 [2024-07-15 18:35:55.753677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.269 [2024-07-15 18:35:55.753690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:54832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.269 [2024-07-15 18:35:55.753704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.269 [2024-07-15 18:35:55.753717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:54840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.269 [2024-07-15 18:35:55.753730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.269 [2024-07-15 18:35:55.753743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:54848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.269 [2024-07-15 18:35:55.753755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.269 [2024-07-15 18:35:55.753769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:54856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.269 [2024-07-15 18:35:55.753783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.269 [2024-07-15 18:35:55.753796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:54864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.269 [2024-07-15 18:35:55.753808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:16:44.269 [2024-07-15 18:35:55.753822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:54872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.269 [2024-07-15 18:35:55.753834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.269 [2024-07-15 18:35:55.753853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:54880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.269 [2024-07-15 18:35:55.753865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.269 [2024-07-15 18:35:55.753879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:54888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.269 [2024-07-15 18:35:55.753891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.269 [2024-07-15 18:35:55.753904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:54896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.269 [2024-07-15 18:35:55.753917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.269 [2024-07-15 18:35:55.753930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:54904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.269 [2024-07-15 18:35:55.753942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.269 [2024-07-15 18:35:55.753956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:54912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.269 [2024-07-15 18:35:55.753969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.269 [2024-07-15 18:35:55.753983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:54920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.269 [2024-07-15 18:35:55.753995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.269 [2024-07-15 18:35:55.754008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:54928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.269 [2024-07-15 18:35:55.754020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.269 [2024-07-15 18:35:55.754034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:54936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.269 [2024-07-15 18:35:55.754046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.269 [2024-07-15 18:35:55.754060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:54944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.269 [2024-07-15 18:35:55.754072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.269 [2024-07-15 18:35:55.754085] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:55024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.269 [2024-07-15 18:35:55.754098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.269 [2024-07-15 18:35:55.754112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:55032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.269 [2024-07-15 18:35:55.754126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.269 [2024-07-15 18:35:55.754140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:55040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.269 [2024-07-15 18:35:55.754152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.269 [2024-07-15 18:35:55.754165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:55048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.269 [2024-07-15 18:35:55.754182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.269 [2024-07-15 18:35:55.754196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:55056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.269 [2024-07-15 18:35:55.754209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.269 [2024-07-15 18:35:55.754223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:55064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.269 [2024-07-15 18:35:55.754236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.269 [2024-07-15 18:35:55.754250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:55072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.269 [2024-07-15 18:35:55.754262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.269 [2024-07-15 18:35:55.754276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:55080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.269 [2024-07-15 18:35:55.754288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.269 [2024-07-15 18:35:55.754301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:55088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.269 [2024-07-15 18:35:55.754314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.269 [2024-07-15 18:35:55.754328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:55096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.269 [2024-07-15 18:35:55.754340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.269 [2024-07-15 18:35:55.754353] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:55104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.269 [2024-07-15 18:35:55.754365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.269 [2024-07-15 18:35:55.754379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:55112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.269 [2024-07-15 18:35:55.754391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.269 [2024-07-15 18:35:55.754405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:55120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.270 [2024-07-15 18:35:55.754417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.270 [2024-07-15 18:35:55.754431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.270 [2024-07-15 18:35:55.754443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.270 [2024-07-15 18:35:55.754457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:55136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.270 [2024-07-15 18:35:55.754470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.270 [2024-07-15 18:35:55.754483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:55144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.270 [2024-07-15 18:35:55.754496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.270 [2024-07-15 18:35:55.754529] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:44.270 [2024-07-15 18:35:55.754539] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:44.270 [2024-07-15 18:35:55.754549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55152 len:8 PRP1 0x0 PRP2 0x0 00:16:44.270 [2024-07-15 18:35:55.754562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.270 [2024-07-15 18:35:55.754618] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2113b80 was disconnected and freed. reset controller. 
00:16:44.270 [2024-07-15 18:35:55.754634] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:16:44.270 [2024-07-15 18:35:55.754676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:44.270 [2024-07-15 18:35:55.754691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.270 [2024-07-15 18:35:55.754704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:44.270 [2024-07-15 18:35:55.754717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.270 [2024-07-15 18:35:55.754730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:44.270 [2024-07-15 18:35:55.754743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.270 [2024-07-15 18:35:55.754756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:44.270 [2024-07-15 18:35:55.754768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.270 [2024-07-15 18:35:55.754780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:44.270 [2024-07-15 18:35:55.757515] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:44.270 [2024-07-15 18:35:55.757551] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095e30 (9): Bad file descriptor 00:16:44.270 [2024-07-15 18:35:55.787707] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:16:44.270 [2024-07-15 18:36:00.163219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:44.270 [2024-07-15 18:36:00.163262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.270 [2024-07-15 18:36:00.163278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:44.270 [2024-07-15 18:36:00.163291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.270 [2024-07-15 18:36:00.163304] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:44.270 [2024-07-15 18:36:00.163317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.270 [2024-07-15 18:36:00.163329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:44.270 [2024-07-15 18:36:00.163341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.270 [2024-07-15 18:36:00.163354] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095e30 is same with the state(5) to be set 00:16:44.270 [2024-07-15 18:36:00.164734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:95616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.270 [2024-07-15 18:36:00.164781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.270 [2024-07-15 18:36:00.164801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:95624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.270 [2024-07-15 18:36:00.164814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.270 [2024-07-15 18:36:00.164828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:95632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.270 [2024-07-15 18:36:00.164840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.270 [2024-07-15 18:36:00.164854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:95640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.270 [2024-07-15 18:36:00.164866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.270 [2024-07-15 18:36:00.164880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:95648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.270 [2024-07-15 18:36:00.164892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.270 [2024-07-15 18:36:00.164905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:95656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.270 [2024-07-15 18:36:00.164917] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.270 [2024-07-15 18:36:00.164932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:95664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.270 [2024-07-15 18:36:00.164944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.270 [2024-07-15 18:36:00.164957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:95672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.270 [2024-07-15 18:36:00.164969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.270 [2024-07-15 18:36:00.164983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:95680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.270 [2024-07-15 18:36:00.164995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.270 [2024-07-15 18:36:00.165009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:95688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.270 [2024-07-15 18:36:00.165021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.270 [2024-07-15 18:36:00.165035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:95696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.270 [2024-07-15 18:36:00.165047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.270 [2024-07-15 18:36:00.165061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:95704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.270 [2024-07-15 18:36:00.165073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.270 [2024-07-15 18:36:00.165088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:95712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.270 [2024-07-15 18:36:00.165101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.270 [2024-07-15 18:36:00.165121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:95720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.270 [2024-07-15 18:36:00.165134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.270 [2024-07-15 18:36:00.165148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:95728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.270 [2024-07-15 18:36:00.165160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.270 [2024-07-15 18:36:00.165174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:95736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.270 [2024-07-15 18:36:00.165187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.271 [2024-07-15 18:36:00.165201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:95744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.271 [2024-07-15 18:36:00.165213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.271 [2024-07-15 18:36:00.165226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:95752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.271 [2024-07-15 18:36:00.165239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.271 [2024-07-15 18:36:00.165253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:95760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.271 [2024-07-15 18:36:00.165265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.271 [2024-07-15 18:36:00.165279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:95768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.271 [2024-07-15 18:36:00.165291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.271 [2024-07-15 18:36:00.165304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:95776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.271 [2024-07-15 18:36:00.165317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.271 [2024-07-15 18:36:00.165330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:95784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.271 [2024-07-15 18:36:00.165343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.271 [2024-07-15 18:36:00.165356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:95792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.271 [2024-07-15 18:36:00.165369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.271 [2024-07-15 18:36:00.165383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:95800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.271 [2024-07-15 18:36:00.165395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.271 [2024-07-15 18:36:00.165408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:95808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.271 [2024-07-15 18:36:00.165421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.271 [2024-07-15 18:36:00.165435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:95816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.271 [2024-07-15 18:36:00.165452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.271 [2024-07-15 18:36:00.165466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:95824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.271 [2024-07-15 18:36:00.165478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.271 [2024-07-15 18:36:00.165492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:95832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.271 [2024-07-15 18:36:00.165504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.271 [2024-07-15 18:36:00.165518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:95840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.271 [2024-07-15 18:36:00.165531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.271 [2024-07-15 18:36:00.165545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:95848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.271 [2024-07-15 18:36:00.165557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.271 [2024-07-15 18:36:00.165580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:95856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.271 [2024-07-15 18:36:00.165593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.271 [2024-07-15 18:36:00.165606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:95864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.271 [2024-07-15 18:36:00.165619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.271 [2024-07-15 18:36:00.165633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:95872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.271 [2024-07-15 18:36:00.165645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.271 [2024-07-15 18:36:00.165659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:95936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.271 [2024-07-15 18:36:00.165672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.271 [2024-07-15 18:36:00.165686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:95944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.271 [2024-07-15 18:36:00.165698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.271 [2024-07-15 18:36:00.165713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:95952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.271 [2024-07-15 18:36:00.165725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.271 
[2024-07-15 18:36:00.165738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:95960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.271 [2024-07-15 18:36:00.165751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.271 [2024-07-15 18:36:00.165764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:95968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.271 [2024-07-15 18:36:00.165776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.271 [2024-07-15 18:36:00.165795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:95976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.271 [2024-07-15 18:36:00.165807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.271 [2024-07-15 18:36:00.165821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:95984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.271 [2024-07-15 18:36:00.165833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.271 [2024-07-15 18:36:00.165847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:95992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.271 [2024-07-15 18:36:00.165859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.271 [2024-07-15 18:36:00.165874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:96000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.271 [2024-07-15 18:36:00.165886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.271 [2024-07-15 18:36:00.165900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:96008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.271 [2024-07-15 18:36:00.165913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.271 [2024-07-15 18:36:00.165926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:96016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.271 [2024-07-15 18:36:00.165938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.271 [2024-07-15 18:36:00.165953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:96024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.271 [2024-07-15 18:36:00.165965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.271 [2024-07-15 18:36:00.165979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:96032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.271 [2024-07-15 18:36:00.165991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.271 [2024-07-15 18:36:00.166005] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:96040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.271 [2024-07-15 18:36:00.166017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.272 [2024-07-15 18:36:00.166032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:96048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.272 [2024-07-15 18:36:00.166044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.272 [2024-07-15 18:36:00.166058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:96056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.272 [2024-07-15 18:36:00.166070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.272 [2024-07-15 18:36:00.166084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:96064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.272 [2024-07-15 18:36:00.166096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.272 [2024-07-15 18:36:00.166110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:96072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.272 [2024-07-15 18:36:00.166122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.272 [2024-07-15 18:36:00.166142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:96080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.272 [2024-07-15 18:36:00.166155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.272 [2024-07-15 18:36:00.166168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:96088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.272 [2024-07-15 18:36:00.166182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.272 [2024-07-15 18:36:00.166196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:96096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.272 [2024-07-15 18:36:00.166208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.272 [2024-07-15 18:36:00.166222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:96104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.272 [2024-07-15 18:36:00.166234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.272 [2024-07-15 18:36:00.166248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:96112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.272 [2024-07-15 18:36:00.166261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.272 [2024-07-15 18:36:00.166274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:48 nsid:1 lba:96120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.272 [2024-07-15 18:36:00.166287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.272 [2024-07-15 18:36:00.166301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:96128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.272 [2024-07-15 18:36:00.166313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.272 [2024-07-15 18:36:00.166327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:96136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.272 [2024-07-15 18:36:00.166339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.272 [2024-07-15 18:36:00.166353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:96144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.272 [2024-07-15 18:36:00.166365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.272 [2024-07-15 18:36:00.166379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:96152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.272 [2024-07-15 18:36:00.166391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.272 [2024-07-15 18:36:00.166405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:96160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.272 [2024-07-15 18:36:00.166418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.272 [2024-07-15 18:36:00.166432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:96168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.272 [2024-07-15 18:36:00.166449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.272 [2024-07-15 18:36:00.166463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:96176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.272 [2024-07-15 18:36:00.166480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.272 [2024-07-15 18:36:00.166494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:96184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.272 [2024-07-15 18:36:00.166506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.272 [2024-07-15 18:36:00.166520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:96192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.272 [2024-07-15 18:36:00.166532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.272 [2024-07-15 18:36:00.166546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:96200 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:16:44.272 [2024-07-15 18:36:00.166558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.272 [2024-07-15 18:36:00.166579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:96208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.272 [2024-07-15 18:36:00.166592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.272 [2024-07-15 18:36:00.166606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:96216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.272 [2024-07-15 18:36:00.166619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.272 [2024-07-15 18:36:00.166633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:96224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.272 [2024-07-15 18:36:00.166645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.272 [2024-07-15 18:36:00.166658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:96232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.272 [2024-07-15 18:36:00.166671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.272 [2024-07-15 18:36:00.166684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:96240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.272 [2024-07-15 18:36:00.166696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.272 [2024-07-15 18:36:00.166711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:96248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.272 [2024-07-15 18:36:00.166723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.273 [2024-07-15 18:36:00.166737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:96256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.273 [2024-07-15 18:36:00.166749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.273 [2024-07-15 18:36:00.166763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:96264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.273 [2024-07-15 18:36:00.166775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.273 [2024-07-15 18:36:00.166789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:96272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.273 [2024-07-15 18:36:00.166801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.273 [2024-07-15 18:36:00.166819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:96280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.273 [2024-07-15 
18:36:00.166832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.273 [2024-07-15 18:36:00.166845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:96288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.273 [2024-07-15 18:36:00.166857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.273 [2024-07-15 18:36:00.166871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:96296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.273 [2024-07-15 18:36:00.166885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.273 [2024-07-15 18:36:00.166899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:96304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.273 [2024-07-15 18:36:00.166911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.273 [2024-07-15 18:36:00.166924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:96312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.273 [2024-07-15 18:36:00.166937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.273 [2024-07-15 18:36:00.166950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:96320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.273 [2024-07-15 18:36:00.166963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.273 [2024-07-15 18:36:00.166976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:96328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.273 [2024-07-15 18:36:00.166988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.273 [2024-07-15 18:36:00.167002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:96336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.273 [2024-07-15 18:36:00.167015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.273 [2024-07-15 18:36:00.167029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:96344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.273 [2024-07-15 18:36:00.167041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.273 [2024-07-15 18:36:00.167055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:96352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.273 [2024-07-15 18:36:00.167067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.273 [2024-07-15 18:36:00.167081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:96360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.273 [2024-07-15 18:36:00.167093] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.273 [2024-07-15 18:36:00.167121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:96368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.273 [2024-07-15 18:36:00.167134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.273 [2024-07-15 18:36:00.167147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:96376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.273 [2024-07-15 18:36:00.167165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.273 [2024-07-15 18:36:00.167178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:96384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.273 [2024-07-15 18:36:00.167190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.273 [2024-07-15 18:36:00.167205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:96392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.273 [2024-07-15 18:36:00.167217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.273 [2024-07-15 18:36:00.167231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:96400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.273 [2024-07-15 18:36:00.167243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.273 [2024-07-15 18:36:00.167257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:96408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.273 [2024-07-15 18:36:00.167269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.273 [2024-07-15 18:36:00.167283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:96416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.273 [2024-07-15 18:36:00.167295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.273 [2024-07-15 18:36:00.167309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:96424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.273 [2024-07-15 18:36:00.167323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.273 [2024-07-15 18:36:00.167336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:96432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.273 [2024-07-15 18:36:00.167349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.273 [2024-07-15 18:36:00.167363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:96440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.273 [2024-07-15 18:36:00.167375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.273 [2024-07-15 18:36:00.167389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:96448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.273 [2024-07-15 18:36:00.167401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.273 [2024-07-15 18:36:00.167415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:96456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.273 [2024-07-15 18:36:00.167427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.273 [2024-07-15 18:36:00.167441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:96464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.273 [2024-07-15 18:36:00.167453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.273 [2024-07-15 18:36:00.167467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:96472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.273 [2024-07-15 18:36:00.167479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.273 [2024-07-15 18:36:00.167493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:96480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.273 [2024-07-15 18:36:00.167510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.273 [2024-07-15 18:36:00.167524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.273 [2024-07-15 18:36:00.167536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.273 [2024-07-15 18:36:00.167550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:96496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.273 [2024-07-15 18:36:00.167562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.273 [2024-07-15 18:36:00.167583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:96504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.273 [2024-07-15 18:36:00.167596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.273 [2024-07-15 18:36:00.167610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:96512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.273 [2024-07-15 18:36:00.167622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.273 [2024-07-15 18:36:00.167636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:96520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.274 [2024-07-15 18:36:00.167648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:16:44.274 [2024-07-15 18:36:00.167662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:96528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.274 [2024-07-15 18:36:00.167674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.274 [2024-07-15 18:36:00.167688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:96536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:44.274 [2024-07-15 18:36:00.167700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.274 [2024-07-15 18:36:00.167725] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:44.274 [2024-07-15 18:36:00.167736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96544 len:8 PRP1 0x0 PRP2 0x0 00:16:44.274 [2024-07-15 18:36:00.167748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.274 [2024-07-15 18:36:00.167765] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:44.274 [2024-07-15 18:36:00.167774] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:44.274 [2024-07-15 18:36:00.167784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96552 len:8 PRP1 0x0 PRP2 0x0 00:16:44.274 [2024-07-15 18:36:00.167796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.274 [2024-07-15 18:36:00.167808] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:44.274 [2024-07-15 18:36:00.167817] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:44.274 [2024-07-15 18:36:00.167826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96560 len:8 PRP1 0x0 PRP2 0x0 00:16:44.274 [2024-07-15 18:36:00.167838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.274 [2024-07-15 18:36:00.167851] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:44.274 [2024-07-15 18:36:00.167864] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:44.274 [2024-07-15 18:36:00.167874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96568 len:8 PRP1 0x0 PRP2 0x0 00:16:44.274 [2024-07-15 18:36:00.167886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.274 [2024-07-15 18:36:00.167899] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:44.274 [2024-07-15 18:36:00.167908] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:44.274 [2024-07-15 18:36:00.167918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96576 len:8 PRP1 0x0 PRP2 0x0 00:16:44.274 [2024-07-15 18:36:00.167930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.274 [2024-07-15 18:36:00.167942] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:44.274 [2024-07-15 18:36:00.167951] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:44.274 [2024-07-15 18:36:00.167960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96584 len:8 PRP1 0x0 PRP2 0x0 00:16:44.274 [2024-07-15 18:36:00.167972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.274 [2024-07-15 18:36:00.167985] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:44.274 [2024-07-15 18:36:00.167994] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:44.274 [2024-07-15 18:36:00.168003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96592 len:8 PRP1 0x0 PRP2 0x0 00:16:44.274 [2024-07-15 18:36:00.168015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.274 [2024-07-15 18:36:00.168028] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:44.274 [2024-07-15 18:36:00.168037] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:44.274 [2024-07-15 18:36:00.168046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96600 len:8 PRP1 0x0 PRP2 0x0 00:16:44.274 [2024-07-15 18:36:00.168058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.274 [2024-07-15 18:36:00.168070] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:44.274 [2024-07-15 18:36:00.168079] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:44.274 [2024-07-15 18:36:00.168088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96608 len:8 PRP1 0x0 PRP2 0x0 00:16:44.274 [2024-07-15 18:36:00.168101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.274 [2024-07-15 18:36:00.168113] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:44.274 [2024-07-15 18:36:00.168122] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:44.274 [2024-07-15 18:36:00.168131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96616 len:8 PRP1 0x0 PRP2 0x0 00:16:44.274 [2024-07-15 18:36:00.168144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.274 [2024-07-15 18:36:00.168157] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:44.274 [2024-07-15 18:36:00.168166] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:44.274 [2024-07-15 18:36:00.168175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96624 len:8 PRP1 0x0 PRP2 0x0 00:16:44.274 [2024-07-15 18:36:00.168187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.274 [2024-07-15 18:36:00.168203] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:16:44.274 [2024-07-15 18:36:00.168213] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:44.274 [2024-07-15 18:36:00.168222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96632 len:8 PRP1 0x0 PRP2 0x0 00:16:44.274 [2024-07-15 18:36:00.168234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.274 [2024-07-15 18:36:00.168250] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:44.274 [2024-07-15 18:36:00.168259] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:44.274 [2024-07-15 18:36:00.168268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95880 len:8 PRP1 0x0 PRP2 0x0 00:16:44.274 [2024-07-15 18:36:00.168280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.274 [2024-07-15 18:36:00.168293] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:44.274 [2024-07-15 18:36:00.187247] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:44.274 [2024-07-15 18:36:00.187285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95888 len:8 PRP1 0x0 PRP2 0x0 00:16:44.274 [2024-07-15 18:36:00.187304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.274 [2024-07-15 18:36:00.187325] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:44.274 [2024-07-15 18:36:00.187339] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:44.274 [2024-07-15 18:36:00.187352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95896 len:8 PRP1 0x0 PRP2 0x0 00:16:44.274 [2024-07-15 18:36:00.187368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.274 [2024-07-15 18:36:00.187384] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:44.274 [2024-07-15 18:36:00.187396] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:44.274 [2024-07-15 18:36:00.187409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95904 len:8 PRP1 0x0 PRP2 0x0 00:16:44.274 [2024-07-15 18:36:00.187425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.274 [2024-07-15 18:36:00.187441] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:44.274 [2024-07-15 18:36:00.187453] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:44.274 [2024-07-15 18:36:00.187466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95912 len:8 PRP1 0x0 PRP2 0x0 00:16:44.274 [2024-07-15 18:36:00.187482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.274 [2024-07-15 18:36:00.187499] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:44.274 [2024-07-15 18:36:00.187511] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:44.274 [2024-07-15 18:36:00.187524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95920 len:8 PRP1 0x0 PRP2 0x0 00:16:44.274 [2024-07-15 18:36:00.187540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.274 [2024-07-15 18:36:00.187557] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:44.274 [2024-07-15 18:36:00.187583] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:44.274 [2024-07-15 18:36:00.187596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95928 len:8 PRP1 0x0 PRP2 0x0 00:16:44.274 [2024-07-15 18:36:00.187626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.274 [2024-07-15 18:36:00.187687] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2122a60 was disconnected and freed. reset controller. 00:16:44.274 [2024-07-15 18:36:00.187706] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:16:44.274 [2024-07-15 18:36:00.187724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:44.274 [2024-07-15 18:36:00.187777] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2095e30 (9): Bad file descriptor 00:16:44.274 [2024-07-15 18:36:00.191429] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:44.274 [2024-07-15 18:36:00.226408] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
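The burst of "ABORTED - SQ DELETION" notices above is the expected signature of a path switch: when bdev_nvme tears down the old TCP qpair, every command still queued on it is completed manually with an abort status, the failed trid is swapped (here from 10.0.0.2:4422 back to 10.0.0.2:4420), and the controller is reset on the surviving path. Each switch leaves one "Resetting controller successful" notice, which the harness counts next. A minimal Bash sketch of that check, assuming the run's bdevperf output has been captured to the try.txt file used elsewhere in this job (variable names here are illustrative):

# Hedged sketch: count successful failover resets in the captured output.
# The log path and the expected count of 3 are taken from this run; adjust for other setups.
log=/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
count=$(grep -c 'Resetting controller successful' "$log")
if (( count != 3 )); then
    echo "expected 3 successful controller resets, found $count" >&2
    exit 1
fi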
00:16:44.274 
00:16:44.274 Latency(us) 
00:16:44.274 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:16:44.274 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 
00:16:44.274 Verification LBA range: start 0x0 length 0x4000 
00:16:44.274 NVMe0n1 : 15.00 11843.34 46.26 269.63 0.00 10545.61 434.27 32004.73 
00:16:44.274 =================================================================================================================== 
00:16:44.274 Total : 11843.34 46.26 269.63 0.00 10545.61 434.27 32004.73 
00:16:44.274 Received shutdown signal, test time was about 15.000000 seconds 
00:16:44.274 
00:16:44.274 Latency(us) 
00:16:44.274 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:16:44.274 =================================================================================================================== 
00:16:44.274 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 
00:16:44.275 18:36:06 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 
00:16:44.275 18:36:06 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 
00:16:44.275 18:36:06 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 
00:16:44.275 18:36:06 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 
00:16:44.275 18:36:06 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=87746 
00:16:44.275 18:36:06 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 87746 /var/tmp/bdevperf.sock 
00:16:44.275 18:36:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 87746 ']' 
00:16:44.275 18:36:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:16:44.275 18:36:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 
00:16:44.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
18:36:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:16:44.275 18:36:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:44.275 18:36:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:44.853 18:36:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:44.853 18:36:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:16:44.853 18:36:07 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:44.853 [2024-07-15 18:36:07.455373] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:45.110 18:36:07 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:16:45.110 [2024-07-15 18:36:07.651343] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:16:45.110 18:36:07 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:45.368 NVMe0n1 00:16:45.369 18:36:07 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:45.626 00:16:45.626 18:36:08 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:45.885 00:16:45.885 18:36:08 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:45.885 18:36:08 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:16:46.141 18:36:08 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:46.398 18:36:08 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:16:49.680 18:36:11 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:49.680 18:36:11 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:16:49.680 18:36:12 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:49.680 18:36:12 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=87882 00:16:49.680 18:36:12 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 87882 00:16:50.639 0 00:16:50.639 18:36:13 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:50.639 [2024-07-15 18:36:06.429533] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 
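The trace above sets up the second failover pass: failover.sh restarts bdevperf idle on /var/tmp/bdevperf.sock, registers two more listeners (ports 4421 and 4422) on the target subsystem, attaches NVMe0 over all three ports, and then detaches the 4420 path so I/O has to move to a surviving one. Condensed into a standalone Bash sketch (same rpc.py calls as traced above; a running target and an idle bdevperf instance are assumed, and the variable names are illustrative):

# Hedged sketch of the path set-up/removal sequence traced above.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/bdevperf.sock
NQN=nqn.2016-06.io.spdk:cnode1
$RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4421
$RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4422
for port in 4420 4421 4422; do
    # each attach adds another path to the same NVMe0 controller inside bdevperf
    $RPC -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s $port -f ipv4 -n $NQN
done
$RPC -s $SOCK bdev_nvme_get_controllers | grep -q NVMe0
$RPC -s $SOCK bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $NQN
sleep 3

The captured bdevperf log for this pass follows.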
00:16:50.639 [2024-07-15 18:36:06.429679] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87746 ] 00:16:50.639 [2024-07-15 18:36:06.560282] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:50.639 [2024-07-15 18:36:06.642325] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:50.639 [2024-07-15 18:36:08.814645] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:16:50.639 [2024-07-15 18:36:08.814740] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:50.639 [2024-07-15 18:36:08.814759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.639 [2024-07-15 18:36:08.814774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:50.639 [2024-07-15 18:36:08.814787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.639 [2024-07-15 18:36:08.814800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:50.639 [2024-07-15 18:36:08.814812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.639 [2024-07-15 18:36:08.814825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:50.639 [2024-07-15 18:36:08.814837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.639 [2024-07-15 18:36:08.814849] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:50.639 [2024-07-15 18:36:08.814879] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:50.639 [2024-07-15 18:36:08.814900] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182ee30 (9): Bad file descriptor 00:16:50.639 [2024-07-15 18:36:08.819371] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:50.639 Running I/O for 1 seconds... 
00:16:50.639 00:16:50.639 Latency(us) 00:16:50.639 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:50.639 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:50.639 Verification LBA range: start 0x0 length 0x4000 00:16:50.639 NVMe0n1 : 1.01 12017.53 46.94 0.00 0.00 10593.84 1566.02 10843.71 00:16:50.639 =================================================================================================================== 00:16:50.639 Total : 12017.53 46.94 0.00 0.00 10593.84 1566.02 10843.71 00:16:50.639 18:36:13 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:50.639 18:36:13 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:16:50.897 18:36:13 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:51.154 18:36:13 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:51.154 18:36:13 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:16:51.413 18:36:13 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:51.413 18:36:13 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:16:54.702 18:36:16 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:54.702 18:36:16 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:16:54.702 18:36:17 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 87746 00:16:54.702 18:36:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 87746 ']' 00:16:54.702 18:36:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 87746 00:16:54.702 18:36:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:16:54.702 18:36:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:54.702 18:36:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87746 00:16:54.702 18:36:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:54.702 18:36:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:54.702 18:36:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87746' 00:16:54.702 killing process with pid 87746 00:16:54.702 18:36:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 87746 00:16:54.702 18:36:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 87746 00:16:54.961 18:36:17 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:16:54.961 18:36:17 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:55.220 18:36:17 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:16:55.220 18:36:17 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:55.220 18:36:17 
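Throughout this test bdevperf is driven in its RPC mode: started with -z it stays idle on a UNIX-domain socket, the attach/detach calls above reshape its set of paths, each measurement pass is kicked off remotely with perform_tests, and at the end the process is killed and the subsystem deleted. A minimal Bash sketch of that loop using the binary and script paths from this run (the harness wraps the kill in killprocess from autotest_common.sh and waits for the RPC socket with waitforlisten; plain shell equivalents are shown here):

# Hedged sketch: run bdevperf idle on an RPC socket, trigger a verify pass, then tear down.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
bdevperf_pid=$!
# ... wait for /var/tmp/bdevperf.sock to appear, then attach/detach NVMe0 paths via rpc.py as shown above ...
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
kill "$bdevperf_pid"
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1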
nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:16:55.220 18:36:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:55.220 18:36:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:16:55.220 18:36:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:55.220 18:36:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:16:55.220 18:36:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:55.220 18:36:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:55.220 rmmod nvme_tcp 00:16:55.220 rmmod nvme_fabrics 00:16:55.220 rmmod nvme_keyring 00:16:55.220 18:36:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:55.220 18:36:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:16:55.220 18:36:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:16:55.220 18:36:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 87390 ']' 00:16:55.220 18:36:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 87390 00:16:55.220 18:36:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 87390 ']' 00:16:55.220 18:36:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 87390 00:16:55.220 18:36:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:16:55.220 18:36:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:55.220 18:36:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87390 00:16:55.220 18:36:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:55.220 18:36:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:55.220 18:36:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87390' 00:16:55.220 killing process with pid 87390 00:16:55.220 18:36:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 87390 00:16:55.220 18:36:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 87390 00:16:55.479 18:36:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:55.479 18:36:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:55.479 18:36:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:55.479 18:36:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:55.479 18:36:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:55.479 18:36:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:55.479 18:36:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:55.479 18:36:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:55.479 18:36:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:55.479 00:16:55.479 real 0m31.341s 00:16:55.479 user 1m59.837s 00:16:55.479 sys 0m5.247s 00:16:55.479 18:36:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:55.479 18:36:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:55.479 ************************************ 00:16:55.479 END TEST nvmf_failover 00:16:55.479 ************************************ 00:16:55.737 18:36:18 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:16:55.737 18:36:18 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:16:55.737 18:36:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:55.737 18:36:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:55.737 18:36:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:55.737 ************************************ 00:16:55.737 START TEST nvmf_host_discovery 00:16:55.737 ************************************ 00:16:55.737 18:36:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:16:55.737 * Looking for test storage... 00:16:55.737 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:55.737 18:36:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:55.737 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:16:55.737 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:55.737 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:55.737 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:55.737 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:55.737 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:55.737 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:55.737 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:55.737 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:55.737 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:55.737 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:55.737 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:16:55.737 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=ee8aff67-4252-4979-91cf-1a72f40d57b6 00:16:55.737 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:55.737 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:55.737 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:55.737 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:55.737 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:55.737 18:36:18 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:55.737 18:36:18 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:55.737 18:36:18 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:55.737 18:36:18 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:55.737 18:36:18 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:55.737 18:36:18 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:55.737 18:36:18 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:16:55.737 18:36:18 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:55.737 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:16:55.737 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:55.737 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:55.737 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:55.737 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:55.737 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:55.737 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:55.737 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:55.737 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:55.737 18:36:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:16:55.737 18:36:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:16:55.737 18:36:18 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:16:55.737 18:36:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:16:55.737 18:36:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:16:55.737 18:36:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:16:55.737 18:36:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:16:55.737 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:55.737 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:55.737 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:55.737 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:55.737 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:55.737 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:55.737 18:36:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:55.737 18:36:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:55.737 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:55.737 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:55.737 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:55.737 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:55.737 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:55.737 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:55.737 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:55.737 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:55.737 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:55.737 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:55.737 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:55.737 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:55.737 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:55.737 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:55.737 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:55.737 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:55.737 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:55.737 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:55.737 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:55.737 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:55.737 Cannot find device "nvmf_tgt_br" 00:16:55.738 
18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # true 00:16:55.738 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:55.738 Cannot find device "nvmf_tgt_br2" 00:16:55.738 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # true 00:16:55.738 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:55.738 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:55.997 Cannot find device "nvmf_tgt_br" 00:16:55.997 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # true 00:16:55.997 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:55.997 Cannot find device "nvmf_tgt_br2" 00:16:55.997 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # true 00:16:55.997 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:55.997 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:55.997 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:55.997 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:55.997 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:16:55.997 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:55.997 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:55.997 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:16:55.997 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:55.997 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:55.997 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:55.997 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:55.997 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:55.997 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:55.997 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:55.997 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:55.997 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:55.997 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:55.997 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:55.997 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:55.997 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:55.997 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:55.997 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@188 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:55.997 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:55.997 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:55.997 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:55.997 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:55.997 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:56.256 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:56.256 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:56.256 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:56.256 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:56.256 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:56.256 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:16:56.256 00:16:56.256 --- 10.0.0.2 ping statistics --- 00:16:56.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:56.256 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:16:56.256 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:56.256 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:56.256 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:16:56.256 00:16:56.256 --- 10.0.0.3 ping statistics --- 00:16:56.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:56.256 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:16:56.256 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:56.256 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:56.256 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:16:56.256 00:16:56.256 --- 10.0.0.1 ping statistics --- 00:16:56.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:56.256 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:16:56.256 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:56.256 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@433 -- # return 0 00:16:56.256 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:56.256 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:56.256 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:56.256 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:56.256 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:56.256 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:56.256 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:56.256 18:36:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:16:56.256 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:56.256 18:36:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:56.256 18:36:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:56.256 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=88179 00:16:56.256 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:56.256 18:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 88179 00:16:56.256 18:36:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 88179 ']' 00:16:56.256 18:36:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:56.256 18:36:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:56.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:56.256 18:36:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:56.256 18:36:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:56.256 18:36:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:56.256 [2024-07-15 18:36:18.747858] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:16:56.256 [2024-07-15 18:36:18.747914] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:56.515 [2024-07-15 18:36:18.890423] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:56.515 [2024-07-15 18:36:18.972147] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:16:56.515 [2024-07-15 18:36:18.972206] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:56.515 [2024-07-15 18:36:18.972215] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:56.515 [2024-07-15 18:36:18.972223] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:56.515 [2024-07-15 18:36:18.972230] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:56.515 [2024-07-15 18:36:18.972276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:57.083 18:36:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:57.083 18:36:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:16:57.083 18:36:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:57.083 18:36:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:57.083 18:36:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:57.083 18:36:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:57.083 18:36:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:57.083 18:36:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.083 18:36:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:57.083 [2024-07-15 18:36:19.661534] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:57.083 18:36:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.083 18:36:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:16:57.083 18:36:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.083 18:36:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:57.083 [2024-07-15 18:36:19.673646] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:16:57.083 18:36:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.083 18:36:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:16:57.083 18:36:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.083 18:36:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:57.083 null0 00:16:57.083 18:36:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.083 18:36:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:16:57.083 18:36:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.083 18:36:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:57.342 null1 00:16:57.342 18:36:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.342 18:36:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:16:57.342 18:36:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # 
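Before the discovery test can reach the target at 10.0.0.2, nvmf_veth_init (traced above) builds a private topology: the target-side veth ends live in the nvmf_tgt_ns_spdk namespace, the initiator end stays in the root namespace, the peer ends are tied together by the nvmf_br bridge, and iptables rules plus ping checks confirm the NVMe/TCP port and bridge forwarding work in both directions. A condensed Bash sketch with the same names and addresses as this run (the second target interface, nvmf_tgt_if2/10.0.0.3, is set up the same way and omitted here):

# Hedged sketch of the veth/namespace topology built above.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

The nvmf_tgt process started above then runs inside this namespace (ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt ...), so its listeners on 10.0.0.2 are reached from the root-namespace host processes across the bridge.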
xtrace_disable 00:16:57.342 18:36:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:57.342 18:36:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.342 18:36:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=88227 00:16:57.342 18:36:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:16:57.342 18:36:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 88227 /tmp/host.sock 00:16:57.342 18:36:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 88227 ']' 00:16:57.342 18:36:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:16:57.342 18:36:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:57.342 18:36:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:16:57.342 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:16:57.342 18:36:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:57.342 18:36:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:57.342 [2024-07-15 18:36:19.769979] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:16:57.342 [2024-07-15 18:36:19.770221] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88227 ] 00:16:57.343 [2024-07-15 18:36:19.913045] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:57.602 [2024-07-15 18:36:20.007675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:58.170 18:36:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:58.170 18:36:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:16:58.170 18:36:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:58.170 18:36:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:16:58.170 18:36:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.170 18:36:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:58.170 18:36:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.170 18:36:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:16:58.170 18:36:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.170 18:36:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:58.170 18:36:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.170 18:36:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:16:58.170 18:36:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:16:58.170 18:36:20 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:58.170 18:36:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:58.170 18:36:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.170 18:36:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:58.170 18:36:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:58.170 18:36:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:58.170 18:36:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.170 18:36:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:16:58.170 18:36:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:16:58.170 18:36:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:58.170 18:36:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:58.170 18:36:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:58.170 18:36:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:58.170 18:36:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.170 18:36:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:58.170 18:36:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.170 18:36:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:16:58.170 18:36:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:16:58.170 18:36:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.170 18:36:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:58.170 18:36:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.170 18:36:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:16:58.170 18:36:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:58.170 18:36:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.170 18:36:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:58.170 18:36:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:58.170 18:36:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:58.170 18:36:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:58.170 18:36:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.430 18:36:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:16:58.430 18:36:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:16:58.430 18:36:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:58.430 18:36:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:58.430 18:36:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.430 18:36:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:58.430 18:36:20 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:16:58.430 18:36:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:58.430 18:36:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.430 18:36:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:16:58.430 18:36:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:16:58.430 18:36:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.430 18:36:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:58.430 18:36:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.430 18:36:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:16:58.430 18:36:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:58.430 18:36:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:58.430 18:36:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.430 18:36:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:58.430 18:36:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:58.430 18:36:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:58.430 18:36:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.430 18:36:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:16:58.430 18:36:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:16:58.430 18:36:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:58.430 18:36:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:58.430 18:36:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.430 18:36:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:58.430 18:36:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:58.430 18:36:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:58.430 18:36:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.430 18:36:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:16:58.430 18:36:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:58.430 18:36:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.430 18:36:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:58.430 [2024-07-15 18:36:20.943798] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:58.430 18:36:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.430 18:36:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:16:58.430 18:36:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:58.430 18:36:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.430 18:36:20 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:16:58.430 18:36:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:58.430 18:36:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:58.430 18:36:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:58.430 18:36:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.430 18:36:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:16:58.430 18:36:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:16:58.430 18:36:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:58.430 18:36:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:58.430 18:36:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:58.430 18:36:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.430 18:36:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:58.430 18:36:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:58.430 18:36:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.689 18:36:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:16:58.689 18:36:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:16:58.689 18:36:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:16:58.689 18:36:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:58.689 18:36:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:58.689 18:36:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:58.689 18:36:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:58.689 18:36:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:58.689 18:36:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:16:58.690 18:36:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:58.690 18:36:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:16:58.690 18:36:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.690 18:36:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:58.690 18:36:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.690 18:36:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:16:58.690 18:36:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:16:58.690 18:36:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:16:58.690 18:36:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:58.690 18:36:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:16:58.690 18:36:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.690 18:36:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:58.690 18:36:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.690 18:36:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:58.690 18:36:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:58.690 18:36:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:58.690 18:36:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:58.690 18:36:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:58.690 18:36:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:16:58.690 18:36:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:58.690 18:36:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.690 18:36:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:58.690 18:36:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:58.690 18:36:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:58.690 18:36:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:58.690 18:36:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.690 18:36:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:16:58.690 18:36:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:16:59.257 [2024-07-15 18:36:21.650872] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:59.257 [2024-07-15 18:36:21.650910] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:59.257 [2024-07-15 18:36:21.650924] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:59.257 [2024-07-15 18:36:21.736842] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:16:59.257 [2024-07-15 18:36:21.793663] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:59.257 [2024-07-15 18:36:21.793704] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:59.825 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:59.825 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:59.825 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:16:59.825 18:36:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:59.825 18:36:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:59.825 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.825 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:59.825 18:36:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:59.825 18:36:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:59.825 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.825 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.825 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:59.825 18:36:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:16:59.825 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:16:59.825 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:59.825 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:59.825 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:16:59.825 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:16:59.825 18:36:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:59.825 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.825 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:59.825 18:36:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:59.825 18:36:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:59.825 18:36:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:59.825 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.825 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:16:59.825 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:59.825 18:36:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:16:59.825 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ 
"$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:16:59.825 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:59.826 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:59.826 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:16:59.826 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:16:59.826 18:36:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:59.826 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.826 18:36:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:59.826 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:59.826 18:36:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:16:59.826 18:36:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:16:59.826 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.826 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:16:59.826 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:59.826 18:36:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:16:59.826 18:36:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:16:59.826 18:36:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:59.826 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:59.826 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:59.826 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:59.826 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:59.826 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:16:59.826 18:36:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:16:59.826 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.826 18:36:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:59.826 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:59.826 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.826 18:36:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:16:59.826 18:36:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:16:59.826 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:16:59.826 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:59.826 18:36:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:16:59.826 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.826 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:59.826 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.826 18:36:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:59.826 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:59.826 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:59.826 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:59.826 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:16:59.826 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:16:59.826 18:36:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:59.826 18:36:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:59.826 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.826 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:59.826 18:36:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:59.826 18:36:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:59.826 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.826 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:59.826 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:59.826 18:36:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:16:59.826 18:36:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:16:59.826 18:36:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:59.826 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:59.826 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:59.826 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:59.826 18:36:22 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:59.826 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:17:00.086 18:36:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:17:00.086 18:36:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:17:00.086 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.086 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:00.086 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.086 18:36:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:17:00.086 18:36:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:17:00.086 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:17:00.086 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:00.086 18:36:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:17:00.086 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.086 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:00.086 [2024-07-15 18:36:22.486065] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:00.086 [2024-07-15 18:36:22.486767] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:17:00.086 [2024-07-15 18:36:22.486800] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:00.086 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.086 18:36:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:00.086 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:00.086 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:00.086 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:00.086 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:17:00.086 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:17:00.086 18:36:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:00.086 18:36:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:00.086 18:36:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:00.086 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.086 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:00.086 18:36:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:00.086 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.086 18:36:22 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.086 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:00.086 18:36:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:00.086 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:00.086 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:00.086 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:00.086 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:17:00.086 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:17:00.086 18:36:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:00.086 18:36:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:00.086 18:36:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:00.086 18:36:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:00.086 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.086 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:00.086 [2024-07-15 18:36:22.572675] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:17:00.086 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.086 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:00.086 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:00.086 18:36:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:17:00.086 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:17:00.086 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:00.086 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:00.086 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:17:00.086 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:17:00.086 18:36:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:17:00.086 18:36:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:17:00.086 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.086 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:00.086 18:36:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:17:00.086 18:36:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:17:00.086 18:36:22 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.086 [2024-07-15 18:36:22.632826] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:17:00.086 [2024-07-15 18:36:22.632851] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:17:00.086 [2024-07-15 18:36:22.632857] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:17:00.087 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:17:00.087 18:36:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:17:01.464 18:36:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:01.465 18:36:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:17:01.465 18:36:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:17:01.465 18:36:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:17:01.465 18:36:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.465 18:36:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:01.465 18:36:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:17:01.465 18:36:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:17:01.465 18:36:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:17:01.465 18:36:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.465 18:36:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:17:01.465 18:36:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:01.465 18:36:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:17:01.465 18:36:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:17:01.465 18:36:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:01.465 18:36:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:01.465 18:36:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:01.465 18:36:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:01.465 18:36:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:01.465 18:36:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:17:01.465 18:36:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:17:01.465 18:36:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.465 18:36:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:01.465 18:36:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # 
jq '. | length' 00:17:01.465 18:36:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.465 18:36:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:17:01.465 18:36:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:17:01.465 18:36:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:17:01.465 18:36:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:01.465 18:36:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:01.465 18:36:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.465 18:36:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:01.465 [2024-07-15 18:36:23.768630] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:17:01.465 [2024-07-15 18:36:23.768660] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:01.465 18:36:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.465 18:36:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:01.465 18:36:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:01.465 18:36:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:01.465 18:36:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:01.465 18:36:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:17:01.465 18:36:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:17:01.465 [2024-07-15 18:36:23.777965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:01.465 [2024-07-15 18:36:23.777991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.465 [2024-07-15 18:36:23.778003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:01.465 [2024-07-15 18:36:23.778012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.465 [2024-07-15 18:36:23.778022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:01.465 [2024-07-15 18:36:23.778030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.465 [2024-07-15 18:36:23.778039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:01.465 [2024-07-15 18:36:23.778048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.465 [2024-07-15 18:36:23.778057] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d84c50 is same 
with the state(5) to be set 00:17:01.465 18:36:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:01.465 18:36:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:01.465 18:36:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:01.465 18:36:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:01.465 18:36:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.465 18:36:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:01.465 [2024-07-15 18:36:23.787915] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d84c50 (9): Bad file descriptor 00:17:01.465 [2024-07-15 18:36:23.797915] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:01.465 [2024-07-15 18:36:23.798125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:01.465 [2024-07-15 18:36:23.798210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d84c50 with addr=10.0.0.2, port=4420 00:17:01.465 [2024-07-15 18:36:23.798322] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d84c50 is same with the state(5) to be set 00:17:01.465 [2024-07-15 18:36:23.798345] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d84c50 (9): Bad file descriptor 00:17:01.465 [2024-07-15 18:36:23.798359] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:01.465 [2024-07-15 18:36:23.798367] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:01.465 [2024-07-15 18:36:23.798378] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:01.465 [2024-07-15 18:36:23.798392] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:01.465 18:36:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.465 [2024-07-15 18:36:23.808056] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:01.465 [2024-07-15 18:36:23.808132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:01.465 [2024-07-15 18:36:23.808147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d84c50 with addr=10.0.0.2, port=4420 00:17:01.465 [2024-07-15 18:36:23.808156] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d84c50 is same with the state(5) to be set 00:17:01.465 [2024-07-15 18:36:23.808169] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d84c50 (9): Bad file descriptor 00:17:01.465 [2024-07-15 18:36:23.808180] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:01.465 [2024-07-15 18:36:23.808189] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:01.465 [2024-07-15 18:36:23.808197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
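For readers following the trace, the get_subsystem_names, get_bdev_list and get_subsystem_paths checks repeated throughout this phase reduce to a few one-line jq pipelines over the host application's RPC socket. This is a minimal sketch reconstructed from the rpc_cmd/jq fragments visible in this log (rpc_cmd is assumed to be the autotest wrapper around SPDK's scripts/rpc.py, pointed at /tmp/host.sock here):

    # Controller names known to the host app; empty until discovery attaches nvme0.
    get_subsystem_names() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }

    # Bdev names exposed to the host; "nvme0n1", then "nvme0n1 nvme0n2" once null1 is added.
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    # TCP service IDs (ports) of every path to a controller, e.g. "4420 4421".
    get_subsystem_paths() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }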
00:17:01.465 [2024-07-15 18:36:23.808209] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:01.465 [2024-07-15 18:36:23.818083] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:01.465 [2024-07-15 18:36:23.818153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:01.465 [2024-07-15 18:36:23.818167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d84c50 with addr=10.0.0.2, port=4420 00:17:01.465 [2024-07-15 18:36:23.818176] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d84c50 is same with the state(5) to be set 00:17:01.465 [2024-07-15 18:36:23.818189] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d84c50 (9): Bad file descriptor 00:17:01.465 [2024-07-15 18:36:23.818201] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:01.465 [2024-07-15 18:36:23.818209] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:01.465 [2024-07-15 18:36:23.818218] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:01.465 [2024-07-15 18:36:23.818229] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:01.465 18:36:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.465 [2024-07-15 18:36:23.828111] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:01.465 [2024-07-15 18:36:23.828165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:01.465 [2024-07-15 18:36:23.828178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d84c50 with addr=10.0.0.2, port=4420 00:17:01.465 [2024-07-15 18:36:23.828187] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d84c50 is same with the state(5) to be set 00:17:01.465 [2024-07-15 18:36:23.828199] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d84c50 (9): Bad file descriptor 00:17:01.465 [2024-07-15 18:36:23.828211] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:01.465 [2024-07-15 18:36:23.828219] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:01.465 [2024-07-15 18:36:23.828228] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:01.465 [2024-07-15 18:36:23.828238] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
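The waitforcondition wrapper around each of those checks is, going by the common/autotest_common.sh@912-918 fragments in this trace, a bounded poll: re-evaluate the caller's condition up to ten times, sleeping one second between attempts. A sketch under that assumption (the failure return value is inferred, not shown in this log):

    waitforcondition() {
        local cond=$1
        local max=10
        while (( max-- )); do
            # cond is a shell expression such as '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'
            if eval "$cond"; then
                return 0
            fi
            sleep 1
        done
        return 1  # assumed: give up and fail the test after ten attempts
    }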
00:17:01.465 18:36:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:01.465 18:36:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:01.465 18:36:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:01.465 18:36:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:01.465 18:36:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:01.465 18:36:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:17:01.465 18:36:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:17:01.465 18:36:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:01.465 18:36:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:01.465 18:36:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.466 18:36:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:01.466 18:36:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:01.466 18:36:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:01.466 [2024-07-15 18:36:23.838129] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:01.466 [2024-07-15 18:36:23.838186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:01.466 [2024-07-15 18:36:23.838199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d84c50 with addr=10.0.0.2, port=4420 00:17:01.466 [2024-07-15 18:36:23.838208] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d84c50 is same with the state(5) to be set 00:17:01.466 [2024-07-15 18:36:23.838220] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d84c50 (9): Bad file descriptor 00:17:01.466 [2024-07-15 18:36:23.838231] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:01.466 [2024-07-15 18:36:23.838239] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:01.466 [2024-07-15 18:36:23.838248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:01.466 [2024-07-15 18:36:23.838258] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
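Similarly, the is_notification_count_eq checks count the bdev notifications delivered since the last observed notify_id. From the host/discovery.sh@74-75 fragments (notification_count and notify_id moving 0 -> 1 -> 2 -> 4 over this run), the bookkeeping is roughly the sketch below; the exact notify_id update is an inference from those values:

    get_notification_count() {
        # Fetch notifications newer than the current watermark and count them.
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
        # Advance the watermark so the next check only counts new add/remove events (assumed).
        notify_id=$((notify_id + notification_count))
    }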
00:17:01.466 [2024-07-15 18:36:23.848151] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:01.466 [2024-07-15 18:36:23.848220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:01.466 [2024-07-15 18:36:23.848235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d84c50 with addr=10.0.0.2, port=4420 00:17:01.466 [2024-07-15 18:36:23.848244] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d84c50 is same with the state(5) to be set 00:17:01.466 [2024-07-15 18:36:23.848257] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d84c50 (9): Bad file descriptor 00:17:01.466 [2024-07-15 18:36:23.848269] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:01.466 [2024-07-15 18:36:23.848277] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:01.466 [2024-07-15 18:36:23.848286] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:01.466 [2024-07-15 18:36:23.848297] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:01.466 [2024-07-15 18:36:23.856020] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:17:01.466 [2024-07-15 18:36:23.856043] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:17:01.466 18:36:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.466 18:36:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:01.466 18:36:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:01.466 18:36:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:17:01.466 18:36:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:17:01.466 18:36:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:01.466 18:36:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:01.466 18:36:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:17:01.466 18:36:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:17:01.466 18:36:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:17:01.466 18:36:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.466 18:36:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:01.466 18:36:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:17:01.466 18:36:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:17:01.466 18:36:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:17:01.466 18:36:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- 
# [[ 0 == 0 ]] 00:17:01.466 18:36:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:17:01.466 18:36:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:01.466 18:36:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:17:01.466 18:36:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:17:01.466 18:36:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:01.466 18:36:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:01.466 18:36:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:01.466 18:36:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:01.466 18:36:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:01.466 18:36:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:17:01.466 18:36:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:17:01.466 18:36:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:17:01.466 18:36:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.466 18:36:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:01.466 18:36:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.466 18:36:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:17:01.466 18:36:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:17:01.466 18:36:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:17:01.466 18:36:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:01.466 18:36:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:17:01.466 18:36:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.466 18:36:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:01.466 18:36:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.466 18:36:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:17:01.466 18:36:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:17:01.466 18:36:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:01.466 18:36:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:01.466 18:36:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:17:01.466 18:36:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:17:01.466 18:36:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:01.466 18:36:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # 
rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:01.466 18:36:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:01.466 18:36:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.466 18:36:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:01.466 18:36:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:01.466 18:36:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.466 18:36:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:17:01.466 18:36:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:01.466 18:36:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:17:01.466 18:36:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:17:01.466 18:36:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:01.466 18:36:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:01.466 18:36:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:17:01.466 18:36:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:17:01.466 18:36:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:01.466 18:36:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:01.466 18:36:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:01.466 18:36:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:01.466 18:36:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.466 18:36:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:01.737 18:36:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.737 18:36:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:17:01.737 18:36:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:01.737 18:36:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:17:01.737 18:36:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:17:01.737 18:36:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:01.737 18:36:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:01.737 18:36:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:01.737 18:36:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:01.737 18:36:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:01.737 18:36:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:17:01.737 18:36:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:17:01.737 18:36:24 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.737 18:36:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:01.737 18:36:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:17:01.737 18:36:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.737 18:36:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:17:01.737 18:36:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:17:01.737 18:36:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:17:01.737 18:36:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:01.737 18:36:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:01.737 18:36:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.737 18:36:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:02.689 [2024-07-15 18:36:25.175466] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:17:02.689 [2024-07-15 18:36:25.175493] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:17:02.689 [2024-07-15 18:36:25.175508] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:02.689 [2024-07-15 18:36:25.261419] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:17:02.947 [2024-07-15 18:36:25.321270] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:17:02.947 [2024-07-15 18:36:25.321318] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:17:02.947 18:36:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.947 18:36:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:02.947 18:36:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:17:02.947 18:36:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:02.947 18:36:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:02.947 18:36:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:02.947 18:36:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:02.947 18:36:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:02.947 18:36:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:02.947 18:36:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.947 18:36:25 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:02.947 2024/07/15 18:36:25 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:17:02.947 request: 00:17:02.947 { 00:17:02.947 "method": "bdev_nvme_start_discovery", 00:17:02.947 "params": { 00:17:02.947 "name": "nvme", 00:17:02.947 "trtype": "tcp", 00:17:02.947 "traddr": "10.0.0.2", 00:17:02.947 "adrfam": "ipv4", 00:17:02.947 "trsvcid": "8009", 00:17:02.947 "hostnqn": "nqn.2021-12.io.spdk:test", 00:17:02.947 "wait_for_attach": true 00:17:02.947 } 00:17:02.947 } 00:17:02.947 Got JSON-RPC error response 00:17:02.947 GoRPCClient: error on JSON-RPC call 00:17:02.947 18:36:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:02.947 18:36:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:17:02.947 18:36:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:02.947 18:36:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:02.947 18:36:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:02.947 18:36:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:17:02.947 18:36:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:17:02.947 18:36:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.947 18:36:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:17:02.947 18:36:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:02.947 18:36:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:17:02.947 18:36:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:17:02.947 18:36:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.947 18:36:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:17:02.947 18:36:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:17:02.947 18:36:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:02.947 18:36:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.947 18:36:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:02.947 18:36:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:02.947 18:36:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:02.947 18:36:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:02.947 18:36:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.947 18:36:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:02.947 18:36:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:02.947 18:36:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:17:02.948 18:36:25 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:02.948 18:36:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:02.948 18:36:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:02.948 18:36:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:02.948 18:36:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:02.948 18:36:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:02.948 18:36:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.948 18:36:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:02.948 2024/07/15 18:36:25 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:17:02.948 request: 00:17:02.948 { 00:17:02.948 "method": "bdev_nvme_start_discovery", 00:17:02.948 "params": { 00:17:02.948 "name": "nvme_second", 00:17:02.948 "trtype": "tcp", 00:17:02.948 "traddr": "10.0.0.2", 00:17:02.948 "adrfam": "ipv4", 00:17:02.948 "trsvcid": "8009", 00:17:02.948 "hostnqn": "nqn.2021-12.io.spdk:test", 00:17:02.948 "wait_for_attach": true 00:17:02.948 } 00:17:02.948 } 00:17:02.948 Got JSON-RPC error response 00:17:02.948 GoRPCClient: error on JSON-RPC call 00:17:02.948 18:36:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:02.948 18:36:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:17:02.948 18:36:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:02.948 18:36:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:02.948 18:36:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:02.948 18:36:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:17:02.948 18:36:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:17:02.948 18:36:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.948 18:36:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:02.948 18:36:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:17:02.948 18:36:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:17:02.948 18:36:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:17:02.948 18:36:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.948 18:36:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:17:02.948 18:36:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:17:02.948 18:36:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:02.948 18:36:25 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:02.948 18:36:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.948 18:36:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:02.948 18:36:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:02.948 18:36:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:03.205 18:36:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.205 18:36:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:03.205 18:36:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:17:03.205 18:36:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:17:03.205 18:36:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:17:03.205 18:36:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:03.205 18:36:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:03.205 18:36:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:03.205 18:36:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:03.205 18:36:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:17:03.205 18:36:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.205 18:36:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:04.138 [2024-07-15 18:36:26.596684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:04.138 [2024-07-15 18:36:26.596746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d80f00 with addr=10.0.0.2, port=8010 00:17:04.138 [2024-07-15 18:36:26.596767] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:17:04.138 [2024-07-15 18:36:26.596777] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:17:04.138 [2024-07-15 18:36:26.596786] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:17:05.099 [2024-07-15 18:36:27.595054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:05.099 [2024-07-15 18:36:27.595119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d80f00 with addr=10.0.0.2, port=8010 00:17:05.099 [2024-07-15 18:36:27.595146] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:17:05.099 [2024-07-15 18:36:27.595156] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:17:05.099 [2024-07-15 18:36:27.595165] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:17:06.030 [2024-07-15 18:36:28.593307] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out 
while attaching discovery ctrlr 00:17:06.030 2024/07/15 18:36:28 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8010 trtype:tcp wait_for_attach:%!s(bool=false)], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:17:06.030 request: 00:17:06.030 { 00:17:06.031 "method": "bdev_nvme_start_discovery", 00:17:06.031 "params": { 00:17:06.031 "name": "nvme_second", 00:17:06.031 "trtype": "tcp", 00:17:06.031 "traddr": "10.0.0.2", 00:17:06.031 "adrfam": "ipv4", 00:17:06.031 "trsvcid": "8010", 00:17:06.031 "hostnqn": "nqn.2021-12.io.spdk:test", 00:17:06.031 "wait_for_attach": false, 00:17:06.031 "attach_timeout_ms": 3000 00:17:06.031 } 00:17:06.031 } 00:17:06.031 Got JSON-RPC error response 00:17:06.031 GoRPCClient: error on JSON-RPC call 00:17:06.031 18:36:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:06.031 18:36:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:17:06.031 18:36:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:06.031 18:36:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:06.031 18:36:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:06.031 18:36:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:17:06.031 18:36:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:17:06.031 18:36:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:17:06.031 18:36:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:17:06.031 18:36:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.031 18:36:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:06.031 18:36:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:17:06.031 18:36:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.288 18:36:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:17:06.288 18:36:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:17:06.288 18:36:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 88227 00:17:06.288 18:36:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:17:06.288 18:36:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:06.288 18:36:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:17:06.288 18:36:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:06.288 18:36:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:17:06.288 18:36:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:06.288 18:36:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:06.288 rmmod nvme_tcp 00:17:06.288 rmmod nvme_fabrics 00:17:06.288 rmmod nvme_keyring 00:17:06.288 18:36:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:06.288 18:36:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:17:06.288 18:36:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 
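
The negative cases traced above follow a consistent pattern: with discovery already running against 10.0.0.2:8009, a second bdev_nvme_start_discovery for that address is rejected with JSON-RPC Code=-17 (File exists), whether it reuses the name nvme or the new name nvme_second; and starting nvme_second against port 8010, where nothing is listening, with -T 3000 (attach_timeout_ms) fails with Code=-110 (Connection timed out). A condensed sketch of those calls, reusing only flags and values visible in this trace (it assumes the same target is still up and that rpc.py is the SPDK script invoked above):

    # Discovery already exists for 10.0.0.2:8009 -> expected to fail with Code=-17 (File exists).
    rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 \
        -f ipv4 -q nqn.2021-12.io.spdk:test -w || echo "expected failure: File exists"

    # Nothing listens on port 8010 -> with a 3000 ms attach timeout, expected to fail with Code=-110.
    rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 \
        -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 || echo "expected failure: Connection timed out"
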
00:17:06.288 18:36:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 88179 ']' 00:17:06.288 18:36:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 88179 00:17:06.288 18:36:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 88179 ']' 00:17:06.288 18:36:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 88179 00:17:06.288 18:36:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:17:06.288 18:36:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:06.288 18:36:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88179 00:17:06.288 18:36:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:06.288 18:36:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:06.288 18:36:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88179' 00:17:06.288 killing process with pid 88179 00:17:06.288 18:36:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 88179 00:17:06.288 18:36:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 88179 00:17:06.544 18:36:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:06.544 18:36:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:06.544 18:36:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:06.544 18:36:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:06.544 18:36:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:06.544 18:36:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:06.544 18:36:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:06.544 18:36:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:06.544 18:36:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:06.544 ************************************ 00:17:06.544 END TEST nvmf_host_discovery 00:17:06.544 ************************************ 00:17:06.544 00:17:06.544 real 0m10.920s 00:17:06.544 user 0m20.726s 00:17:06.544 sys 0m2.244s 00:17:06.544 18:36:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:06.544 18:36:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:06.544 18:36:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:06.544 18:36:29 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:17:06.544 18:36:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:06.544 18:36:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:06.544 18:36:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:06.544 ************************************ 00:17:06.544 START TEST nvmf_host_multipath_status 00:17:06.544 ************************************ 00:17:06.544 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 
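
The multipath_status run launched here drives its checks through bdevperf's RPC socket: the repeated port_status calls below fetch bdev_nvme_get_io_paths and use jq to read the current/connected/accessible flag of the io_path with a given trsvcid. A condensed sketch of that check, built from the exact RPC and jq filter that appear later in this trace (the helper name check_path is illustrative, not taken from the script):

    # Read one flag (current/connected/accessible) for the io_path on a given port.
    check_path() {
        local port=$1 field=$2
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$port\").$field"
    }
    # For example, after "set_ANA_state optimized optimized" the test expects:
    [[ $(check_path 4420 current) == "true" ]]
    [[ $(check_path 4421 current) == "false" ]]
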
00:17:06.802 * Looking for test storage... 00:17:06.802 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:06.802 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:06.802 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:17:06.802 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:06.802 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:06.802 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:06.802 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:06.802 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:06.802 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:06.802 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:06.802 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:06.802 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:06.802 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:06.802 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:17:06.802 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=ee8aff67-4252-4979-91cf-1a72f40d57b6 00:17:06.802 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:06.802 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:06.802 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:06.802 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:06.802 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:06.802 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:06.802 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:06.802 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:06.802 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.802 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.802 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.802 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:17:06.802 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.802 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:17:06.802 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:06.802 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:06.802 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:06.802 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:06.802 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:06.802 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:06.802 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:06.802 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:06.802 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:06.802 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:06.802 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:06.802 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:17:06.802 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:06.802 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 
-- # NQN=nqn.2016-06.io.spdk:cnode1 00:17:06.802 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:17:06.802 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:06.802 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:06.802 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:06.802 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:06.802 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:06.802 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:06.802 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:06.802 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:06.802 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:06.802 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:06.802 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:06.802 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:06.802 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:06.802 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:06.802 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:06.802 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:06.802 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:06.802 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:06.802 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:06.802 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:06.802 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:06.802 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:06.802 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:06.802 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:06.802 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:06.802 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:06.802 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:06.802 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:06.802 Cannot find device "nvmf_tgt_br" 00:17:06.802 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # true 00:17:06.802 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # ip 
link set nvmf_tgt_br2 nomaster 00:17:06.802 Cannot find device "nvmf_tgt_br2" 00:17:06.802 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # true 00:17:06.802 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:06.802 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:06.802 Cannot find device "nvmf_tgt_br" 00:17:06.802 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # true 00:17:06.802 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:06.802 Cannot find device "nvmf_tgt_br2" 00:17:06.802 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # true 00:17:06.802 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:07.060 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:07.060 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:07.060 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:07.060 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:17:07.060 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:07.060 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:07.060 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:17:07.060 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:07.060 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:07.060 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:07.060 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:07.060 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:07.060 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:07.060 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:07.060 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:07.060 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:07.060 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:07.060 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:07.060 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:07.060 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:07.060 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:07.060 18:36:29 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:07.060 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:07.060 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:07.060 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:07.060 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:07.060 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:07.060 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:07.060 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:07.060 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:07.060 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:07.060 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:07.060 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:17:07.060 00:17:07.060 --- 10.0.0.2 ping statistics --- 00:17:07.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:07.060 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:17:07.060 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:07.060 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:07.060 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:17:07.060 00:17:07.060 --- 10.0.0.3 ping statistics --- 00:17:07.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:07.060 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:17:07.060 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:07.317 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:07.317 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:17:07.317 00:17:07.317 --- 10.0.0.1 ping statistics --- 00:17:07.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:07.317 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:17:07.317 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:07.317 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@433 -- # return 0 00:17:07.317 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:07.317 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:07.317 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:07.317 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:07.317 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:07.317 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:07.317 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:07.317 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:17:07.317 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:07.317 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:07.317 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:17:07.317 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=88713 00:17:07.317 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:17:07.317 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 88713 00:17:07.317 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 88713 ']' 00:17:07.317 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:07.317 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:07.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:07.317 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:07.317 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:07.317 18:36:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:17:07.317 [2024-07-15 18:36:29.771162] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 
00:17:07.318 [2024-07-15 18:36:29.771226] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:07.318 [2024-07-15 18:36:29.913563] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:07.574 [2024-07-15 18:36:29.994228] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:07.574 [2024-07-15 18:36:29.994266] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:07.574 [2024-07-15 18:36:29.994276] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:07.574 [2024-07-15 18:36:29.994300] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:07.574 [2024-07-15 18:36:29.994307] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:07.574 [2024-07-15 18:36:29.994507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:07.574 [2024-07-15 18:36:29.994507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:08.140 18:36:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:08.140 18:36:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:17:08.140 18:36:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:08.140 18:36:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:08.140 18:36:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:17:08.140 18:36:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:08.140 18:36:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=88713 00:17:08.140 18:36:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:08.399 [2024-07-15 18:36:30.901709] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:08.399 18:36:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:17:08.701 Malloc0 00:17:08.701 18:36:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:17:08.981 18:36:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:08.981 18:36:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:09.239 [2024-07-15 18:36:31.705712] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:09.240 18:36:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4421 00:17:09.499 [2024-07-15 18:36:31.893500] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:09.499 18:36:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:17:09.499 18:36:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=88810 00:17:09.499 18:36:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:09.499 18:36:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 88810 /var/tmp/bdevperf.sock 00:17:09.499 18:36:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 88810 ']' 00:17:09.499 18:36:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:09.499 18:36:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:09.499 18:36:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:09.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:09.499 18:36:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:09.499 18:36:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:17:10.434 18:36:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:10.434 18:36:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:17:10.434 18:36:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:17:10.693 18:36:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:17:10.950 Nvme0n1 00:17:10.950 18:36:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:17:11.208 Nvme0n1 00:17:11.208 18:36:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:17:11.208 18:36:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:17:13.110 18:36:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:17:13.110 18:36:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:17:13.369 18:36:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 
-n optimized 00:17:13.628 18:36:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:17:14.563 18:36:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:17:14.563 18:36:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:14.563 18:36:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:14.563 18:36:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:14.821 18:36:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:14.821 18:36:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:17:14.822 18:36:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:14.822 18:36:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:15.079 18:36:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:15.079 18:36:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:15.079 18:36:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:15.079 18:36:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:15.336 18:36:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:15.337 18:36:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:15.337 18:36:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:15.337 18:36:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:15.337 18:36:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:15.337 18:36:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:15.337 18:36:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:15.337 18:36:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:15.594 18:36:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:15.594 18:36:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:15.594 18:36:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:15.594 18:36:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:15.852 18:36:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:15.852 18:36:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:17:15.852 18:36:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:16.111 18:36:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:16.396 18:36:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:17:17.333 18:36:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:17:17.333 18:36:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:17:17.333 18:36:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:17.333 18:36:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:17.592 18:36:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:17.592 18:36:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:17:17.592 18:36:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:17.592 18:36:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:17.592 18:36:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:17.592 18:36:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:17.592 18:36:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:17.592 18:36:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:17.850 18:36:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:17.850 18:36:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:17.850 18:36:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:17.850 18:36:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:18.109 18:36:40 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:18.109 18:36:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:18.109 18:36:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:18.109 18:36:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:18.368 18:36:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:18.368 18:36:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:18.368 18:36:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:18.368 18:36:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:18.368 18:36:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:18.368 18:36:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:17:18.368 18:36:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:18.627 18:36:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:17:18.885 18:36:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:17:19.822 18:36:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:17:19.822 18:36:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:19.823 18:36:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:19.823 18:36:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:20.080 18:36:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:20.080 18:36:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:17:20.080 18:36:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:20.080 18:36:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:20.339 18:36:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:20.339 18:36:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:20.339 18:36:42 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:20.339 18:36:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:20.339 18:36:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:20.339 18:36:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:20.339 18:36:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:20.339 18:36:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:20.598 18:36:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:20.598 18:36:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:20.598 18:36:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:20.598 18:36:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:20.855 18:36:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:20.855 18:36:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:20.855 18:36:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:20.855 18:36:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:21.113 18:36:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:21.113 18:36:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:17:21.113 18:36:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:21.372 18:36:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:17:21.372 18:36:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:17:22.746 18:36:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:17:22.746 18:36:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:22.746 18:36:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:22.747 18:36:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- 
# jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:22.747 18:36:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:22.747 18:36:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:17:22.747 18:36:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:22.747 18:36:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:22.747 18:36:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:22.747 18:36:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:22.747 18:36:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:22.747 18:36:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:23.005 18:36:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:23.005 18:36:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:23.005 18:36:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:23.005 18:36:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:23.323 18:36:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:23.323 18:36:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:23.323 18:36:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:23.323 18:36:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:23.596 18:36:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:23.596 18:36:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:17:23.596 18:36:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:23.596 18:36:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:23.854 18:36:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:23.854 18:36:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:17:23.854 18:36:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:17:23.854 18:36:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:17:24.113 18:36:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:17:25.047 18:36:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:17:25.047 18:36:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:17:25.047 18:36:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:25.047 18:36:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:25.305 18:36:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:25.305 18:36:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:17:25.305 18:36:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:25.305 18:36:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:25.563 18:36:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:25.563 18:36:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:25.563 18:36:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:25.563 18:36:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:25.822 18:36:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:25.822 18:36:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:25.822 18:36:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:25.822 18:36:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:25.822 18:36:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:25.822 18:36:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:17:25.822 18:36:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:25.822 18:36:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:26.079 18:36:48 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:26.079 18:36:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:17:26.079 18:36:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:26.079 18:36:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:26.336 18:36:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:26.336 18:36:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:17:26.336 18:36:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:17:26.594 18:36:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:26.853 18:36:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:17:27.791 18:36:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:17:27.791 18:36:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:17:27.791 18:36:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:27.791 18:36:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:28.050 18:36:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:28.050 18:36:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:17:28.050 18:36:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:28.050 18:36:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:28.050 18:36:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:28.050 18:36:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:28.050 18:36:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:28.050 18:36:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:28.310 18:36:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:28.310 18:36:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:28.310 18:36:50 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:28.310 18:36:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:28.569 18:36:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:28.569 18:36:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:17:28.569 18:36:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:28.569 18:36:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:28.828 18:36:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:28.828 18:36:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:28.828 18:36:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:28.828 18:36:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:29.087 18:36:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:29.087 18:36:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:17:29.087 18:36:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:17:29.087 18:36:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:17:29.346 18:36:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:29.605 18:36:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:17:30.542 18:36:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:17:30.542 18:36:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:30.542 18:36:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:30.542 18:36:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:30.801 18:36:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:30.801 18:36:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:17:30.801 18:36:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:30.801 18:36:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:31.061 18:36:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:31.061 18:36:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:31.061 18:36:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:31.061 18:36:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:31.061 18:36:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:31.061 18:36:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:31.061 18:36:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:31.061 18:36:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:31.320 18:36:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:31.320 18:36:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:31.320 18:36:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:31.320 18:36:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:31.578 18:36:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:31.578 18:36:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:31.578 18:36:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:31.578 18:36:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:31.836 18:36:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:31.836 18:36:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:17:31.836 18:36:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:32.095 18:36:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:32.095 18:36:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:17:33.471 
18:36:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:17:33.471 18:36:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:17:33.471 18:36:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:33.471 18:36:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:33.471 18:36:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:33.471 18:36:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:17:33.471 18:36:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:33.471 18:36:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:33.471 18:36:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:33.471 18:36:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:33.471 18:36:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:33.471 18:36:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:33.729 18:36:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:33.729 18:36:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:33.729 18:36:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:33.729 18:36:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:34.002 18:36:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:34.002 18:36:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:34.002 18:36:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:34.002 18:36:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:34.260 18:36:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:34.260 18:36:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:34.260 18:36:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:34.260 18:36:56 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:34.519 18:36:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:34.519 18:36:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:17:34.519 18:36:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:34.519 18:36:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:17:34.779 18:36:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:17:35.772 18:36:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:17:35.773 18:36:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:35.773 18:36:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:35.773 18:36:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:36.031 18:36:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:36.031 18:36:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:17:36.031 18:36:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:36.031 18:36:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:36.290 18:36:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:36.290 18:36:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:36.290 18:36:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:36.290 18:36:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:36.290 18:36:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:36.291 18:36:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:36.291 18:36:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:36.291 18:36:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:36.550 18:36:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:36.550 18:36:59 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:36.550 18:36:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:36.550 18:36:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:36.809 18:36:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:36.809 18:36:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:36.809 18:36:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:36.809 18:36:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:37.069 18:36:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:37.069 18:36:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:17:37.069 18:36:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:37.069 18:36:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:17:37.328 18:36:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:17:38.705 18:37:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:17:38.705 18:37:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:38.705 18:37:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:38.705 18:37:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:38.705 18:37:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:38.705 18:37:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:17:38.705 18:37:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:38.705 18:37:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:38.705 18:37:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:38.705 18:37:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:38.705 18:37:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:38.705 18:37:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:38.962 18:37:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:38.962 18:37:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:38.962 18:37:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:38.962 18:37:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:39.220 18:37:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:39.220 18:37:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:39.220 18:37:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:39.220 18:37:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:39.479 18:37:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:39.479 18:37:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:17:39.479 18:37:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:39.479 18:37:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:39.739 18:37:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:39.739 18:37:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 88810 00:17:39.739 18:37:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 88810 ']' 00:17:39.739 18:37:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 88810 00:17:39.739 18:37:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:17:39.739 18:37:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:39.739 18:37:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88810 00:17:39.739 killing process with pid 88810 00:17:39.739 18:37:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:39.739 18:37:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:39.739 18:37:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88810' 00:17:39.739 18:37:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 88810 00:17:39.739 18:37:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 88810 00:17:39.739 Connection closed with partial response: 00:17:39.739 
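For readability, the shell helpers that the trace above re-enters repeatedly (set_ANA_state, port_status, check_status) can be sketched roughly as follows. This is a minimal reconstruction based only on the commands visible in the trace (the bdevperf RPC socket /var/tmp/bdevperf.sock, the nqn.2016-06.io.spdk:cnode1 listeners on 10.0.0.2 ports 4420 and 4421, and the jq filters); it is not the exact contents of test/nvmf/host/multipath_status.sh and the helper names and argument order are inferred from the traced calls.

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # path as it appears in the trace
    bperf_rpc="$rpc_py -s /var/tmp/bdevperf.sock"

    # Set the ANA state of the two target listeners: port 4420 gets $1, port 4421 gets $2.
    set_ANA_state() {
            $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
                    -t tcp -a 10.0.0.2 -s 4420 -n "$1"
            $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
                    -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }

    # Ask bdevperf (over its RPC socket) for the I/O paths and compare one field
    # (current/connected/accessible) of the path whose trsvcid matches the port.
    port_status() {
            local port=$1 field=$2 expected=$3
            local actual
            actual=$($bperf_rpc bdev_nvme_get_io_paths |
                    jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
            [[ "$actual" == "$expected" ]]
    }

    # Argument order matches the traced calls: 4420/4421 current, connected, accessible.
    check_status() {
            port_status 4420 current "$1" && port_status 4421 current "$2" &&
            port_status 4420 connected "$3" && port_status 4421 connected "$4" &&
            port_status 4420 accessible "$5" && port_status 4421 accessible "$6"
    }

With these helpers, each phase in the trace is simply set_ANA_state <state_4420> <state_4421>, a one second sleep, then check_status with the six expected field values; the bdevperf dump that follows (try.txt) is printed after the bdevperf process with pid 88810 is killed.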
00:17:39.739 00:17:39.739 18:37:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 88810 00:17:39.739 18:37:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:39.739 [2024-07-15 18:36:31.951384] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:17:39.739 [2024-07-15 18:36:31.951465] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88810 ] 00:17:39.739 [2024-07-15 18:36:32.081576] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:39.739 [2024-07-15 18:36:32.164162] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:39.739 Running I/O for 90 seconds... 00:17:39.739 [2024-07-15 18:36:46.396233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:48408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.739 [2024-07-15 18:36:46.396298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:39.739 [2024-07-15 18:36:46.396346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:48432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.739 [2024-07-15 18:36:46.396361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:39.739 [2024-07-15 18:36:46.396380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:48440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.739 [2024-07-15 18:36:46.396392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:39.739 [2024-07-15 18:36:46.396410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:48448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.739 [2024-07-15 18:36:46.396423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:39.739 [2024-07-15 18:36:46.396440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:48456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.739 [2024-07-15 18:36:46.396453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:39.739 [2024-07-15 18:36:46.396471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:48464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.739 [2024-07-15 18:36:46.396483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:39.739 [2024-07-15 18:36:46.396501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:48472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.739 [2024-07-15 18:36:46.396513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:39.739 [2024-07-15 18:36:46.396531] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:48480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.739 [2024-07-15 18:36:46.396543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:39.739 [2024-07-15 18:36:46.396561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:48488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.739 [2024-07-15 18:36:46.396584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:39.739 [2024-07-15 18:36:46.396601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:48496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.739 [2024-07-15 18:36:46.396614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:39.739 [2024-07-15 18:36:46.396631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:48504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.739 [2024-07-15 18:36:46.396662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:39.739 [2024-07-15 18:36:46.396680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:48512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.739 [2024-07-15 18:36:46.396693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:39.739 [2024-07-15 18:36:46.396711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:48520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.739 [2024-07-15 18:36:46.396723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:39.739 [2024-07-15 18:36:46.396740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:48528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.739 [2024-07-15 18:36:46.396753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:39.739 [2024-07-15 18:36:46.396770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:48536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.739 [2024-07-15 18:36:46.396783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:39.739 [2024-07-15 18:36:46.396800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:48544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.739 [2024-07-15 18:36:46.396813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:39.739 [2024-07-15 18:36:46.396830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:48552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.739 [2024-07-15 18:36:46.396843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 
00:17:39.739 [2024-07-15 18:36:46.396860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:48560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.739 [2024-07-15 18:36:46.396872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.739 [2024-07-15 18:36:46.396890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:48568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.739 [2024-07-15 18:36:46.396902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:39.739 [2024-07-15 18:36:46.396919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:48576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.739 [2024-07-15 18:36:46.396931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:39.739 [2024-07-15 18:36:46.396949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:48584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.739 [2024-07-15 18:36:46.396961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:39.739 [2024-07-15 18:36:46.396979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:48592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.739 [2024-07-15 18:36:46.396991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:39.739 [2024-07-15 18:36:46.397008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:48600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.739 [2024-07-15 18:36:46.397020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:39.739 [2024-07-15 18:36:46.397044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:48608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.739 [2024-07-15 18:36:46.397057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:39.739 [2024-07-15 18:36:46.397074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:48616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.739 [2024-07-15 18:36:46.397088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:39.739 [2024-07-15 18:36:46.397106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:48624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.739 [2024-07-15 18:36:46.397118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:39.740 [2024-07-15 18:36:46.397136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:48632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.740 [2024-07-15 18:36:46.397148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:60 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:39.740 [2024-07-15 18:36:46.397166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:48640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.740 [2024-07-15 18:36:46.397179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:39.740 [2024-07-15 18:36:46.397197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:48648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.740 [2024-07-15 18:36:46.397210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:39.740 [2024-07-15 18:36:46.397228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:48656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.740 [2024-07-15 18:36:46.397241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:39.740 [2024-07-15 18:36:46.397259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:48664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.740 [2024-07-15 18:36:46.397271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:39.740 [2024-07-15 18:36:46.397289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:48672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.740 [2024-07-15 18:36:46.397302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:39.740 [2024-07-15 18:36:46.397320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:48680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.740 [2024-07-15 18:36:46.397333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:39.740 [2024-07-15 18:36:46.397351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:48688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.740 [2024-07-15 18:36:46.397364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:39.740 [2024-07-15 18:36:46.397592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:48696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.740 [2024-07-15 18:36:46.397615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:39.740 [2024-07-15 18:36:46.397647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:48704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.740 [2024-07-15 18:36:46.397660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:39.740 [2024-07-15 18:36:46.397681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:48712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.740 [2024-07-15 18:36:46.397694] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:39.740 [2024-07-15 18:36:46.397715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:48720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.740 [2024-07-15 18:36:46.397728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:39.740 [2024-07-15 18:36:46.397749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:48728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.740 [2024-07-15 18:36:46.397761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:39.740 [2024-07-15 18:36:46.397781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:48736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.740 [2024-07-15 18:36:46.397794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:39.740 [2024-07-15 18:36:46.397814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:48744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.740 [2024-07-15 18:36:46.397827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:39.740 [2024-07-15 18:36:46.397847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:48752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.740 [2024-07-15 18:36:46.397859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:39.740 [2024-07-15 18:36:46.397879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:48760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.740 [2024-07-15 18:36:46.397893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:39.740 [2024-07-15 18:36:46.397912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:48768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.740 [2024-07-15 18:36:46.397925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:39.740 [2024-07-15 18:36:46.397944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:48776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.740 [2024-07-15 18:36:46.397957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:39.740 [2024-07-15 18:36:46.397977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:48784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.740 [2024-07-15 18:36:46.397989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:39.740 [2024-07-15 18:36:46.398009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:48792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.740 [2024-07-15 18:36:46.398022] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:39.740 [2024-07-15 18:36:46.398041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:48800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.740 [2024-07-15 18:36:46.398059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:39.740 [2024-07-15 18:36:46.398080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:48808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.740 [2024-07-15 18:36:46.398093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:39.740 [2024-07-15 18:36:46.398112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:48816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.740 [2024-07-15 18:36:46.398125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.740 [2024-07-15 18:36:46.398145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:48824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.740 [2024-07-15 18:36:46.398158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:39.740 [2024-07-15 18:36:46.398178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:48832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.740 [2024-07-15 18:36:46.398190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:39.740 [2024-07-15 18:36:46.398210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:48840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.740 [2024-07-15 18:36:46.398223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:39.740 [2024-07-15 18:36:46.398242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:48848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.740 [2024-07-15 18:36:46.398261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:39.740 [2024-07-15 18:36:46.398281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:48856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.740 [2024-07-15 18:36:46.398294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:39.740 [2024-07-15 18:36:46.398313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:48864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.740 [2024-07-15 18:36:46.398326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:39.740 [2024-07-15 18:36:46.398345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:48872 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:17:39.740 [2024-07-15 18:36:46.398358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:39.740 [2024-07-15 18:36:46.398378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:48880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.740 [2024-07-15 18:36:46.398391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:39.740 [2024-07-15 18:36:46.398411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:48888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.740 [2024-07-15 18:36:46.398423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:39.740 [2024-07-15 18:36:46.398443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:48896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.740 [2024-07-15 18:36:46.398460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:39.740 [2024-07-15 18:36:46.398480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:48904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.740 [2024-07-15 18:36:46.398493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:39.740 [2024-07-15 18:36:46.398513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:48912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.740 [2024-07-15 18:36:46.398528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:39.740 [2024-07-15 18:36:46.398548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:48920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.740 [2024-07-15 18:36:46.398561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:39.740 [2024-07-15 18:36:46.398590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:48928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.740 [2024-07-15 18:36:46.398603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:39.740 [2024-07-15 18:36:46.398623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:48936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.740 [2024-07-15 18:36:46.398635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:39.740 [2024-07-15 18:36:46.399594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:48944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.740 [2024-07-15 18:36:46.399613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:39.740 [2024-07-15 18:36:46.399710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:16 nsid:1 lba:48952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.740 [2024-07-15 18:36:46.399726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:39.740 [2024-07-15 18:36:46.399752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:48960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.741 [2024-07-15 18:36:46.399765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:39.741 [2024-07-15 18:36:46.399789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:48968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.741 [2024-07-15 18:36:46.399801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:39.741 [2024-07-15 18:36:46.399825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:48976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.741 [2024-07-15 18:36:46.399841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:39.741 [2024-07-15 18:36:46.399865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:48984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.741 [2024-07-15 18:36:46.399878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:39.741 [2024-07-15 18:36:46.399902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:48992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.741 [2024-07-15 18:36:46.399915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:39.741 [2024-07-15 18:36:46.399946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:49000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.741 [2024-07-15 18:36:46.399959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:39.741 [2024-07-15 18:36:46.399983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:49008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.741 [2024-07-15 18:36:46.399995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:39.741 [2024-07-15 18:36:46.400019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:49016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.741 [2024-07-15 18:36:46.400032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:39.741 [2024-07-15 18:36:46.400056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:49024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.741 [2024-07-15 18:36:46.400069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:39.741 [2024-07-15 18:36:46.400092] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:49032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.741 [2024-07-15 18:36:46.400105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:39.741 [2024-07-15 18:36:46.400129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:49040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.741 [2024-07-15 18:36:46.400143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:39.741 [2024-07-15 18:36:46.400167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:49048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.741 [2024-07-15 18:36:46.400179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:39.741 [2024-07-15 18:36:46.400203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:49056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.741 [2024-07-15 18:36:46.400215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:39.741 [2024-07-15 18:36:46.400239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:49064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.741 [2024-07-15 18:36:46.400252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:39.741 [2024-07-15 18:36:46.400276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:49072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.741 [2024-07-15 18:36:46.400289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.741 [2024-07-15 18:36:46.400313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:49080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.741 [2024-07-15 18:36:46.400326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:39.741 [2024-07-15 18:36:46.400350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:49088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.741 [2024-07-15 18:36:46.400363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:39.741 [2024-07-15 18:36:46.400391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:49096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.741 [2024-07-15 18:36:46.400404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:39.741 [2024-07-15 18:36:46.400428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:49104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.741 [2024-07-15 18:36:46.400442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 
00:17:39.741 [2024-07-15 18:36:46.400466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:49112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.741 [2024-07-15 18:36:46.400479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:39.741 [2024-07-15 18:36:46.400503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:49120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.741 [2024-07-15 18:36:46.400515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:39.741 [2024-07-15 18:36:46.400539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:48416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.741 [2024-07-15 18:36:46.400552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:39.741 [2024-07-15 18:36:46.400587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:48424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.741 [2024-07-15 18:36:46.400600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:39.741 [2024-07-15 18:36:59.855537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:16448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.741 [2024-07-15 18:36:59.855607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:39.741 [2024-07-15 18:36:59.856177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:16464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.741 [2024-07-15 18:36:59.856201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:39.741 [2024-07-15 18:36:59.856223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:16480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.741 [2024-07-15 18:36:59.856236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:39.741 [2024-07-15 18:36:59.856254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:16496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.741 [2024-07-15 18:36:59.856266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:39.741 [2024-07-15 18:36:59.856284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:16248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.741 [2024-07-15 18:36:59.856296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:39.741 [2024-07-15 18:36:59.856314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:16512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.741 [2024-07-15 18:36:59.856326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:78 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:39.741 [2024-07-15 18:36:59.856343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:16528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.741 [2024-07-15 18:36:59.856376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:39.741 [2024-07-15 18:36:59.856394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:16544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.741 [2024-07-15 18:36:59.856407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:39.741 [2024-07-15 18:36:59.856425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:16560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.741 [2024-07-15 18:36:59.856437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:39.741 [2024-07-15 18:36:59.856454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.741 [2024-07-15 18:36:59.856467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:39.741 [2024-07-15 18:36:59.856484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:16592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.741 [2024-07-15 18:36:59.856496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.741 [2024-07-15 18:36:59.856514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:16608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.741 [2024-07-15 18:36:59.856526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:39.741 [2024-07-15 18:36:59.856543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.741 [2024-07-15 18:36:59.856556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:39.741 [2024-07-15 18:36:59.856585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:16640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.741 [2024-07-15 18:36:59.856597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:39.741 [2024-07-15 18:36:59.856615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:16656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.741 [2024-07-15 18:36:59.856628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:39.741 [2024-07-15 18:36:59.856645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:16672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.741 [2024-07-15 18:36:59.856658] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:39.741 [2024-07-15 18:36:59.856675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:16688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.741 [2024-07-15 18:36:59.856687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:39.741 [2024-07-15 18:36:59.856706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:16704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.741 [2024-07-15 18:36:59.856720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:39.741 [2024-07-15 18:36:59.856738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:16720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.741 [2024-07-15 18:36:59.856779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:39.742 [2024-07-15 18:36:59.856798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:16736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.742 [2024-07-15 18:36:59.856810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:39.742 [2024-07-15 18:36:59.856828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:16752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.742 [2024-07-15 18:36:59.856841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:39.742 [2024-07-15 18:36:59.856859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:16768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.742 [2024-07-15 18:36:59.856871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:39.742 [2024-07-15 18:36:59.856889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:16784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.742 [2024-07-15 18:36:59.856901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:39.742 [2024-07-15 18:36:59.856919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:16800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.742 [2024-07-15 18:36:59.856933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:39.742 [2024-07-15 18:36:59.856950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:16272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.742 [2024-07-15 18:36:59.856963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:39.742 [2024-07-15 18:36:59.856981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:16304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:39.742 [2024-07-15 18:36:59.856994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:17:39.742 [2024-07-15 18:36:59.858962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:16280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:39.742 [2024-07-15 18:36:59.858991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:17:39.742 [2024-07-15 18:36:59.859013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:16816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:39.742 [2024-07-15 18:36:59.859026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:17:39.742 [2024-07-15 18:36:59.859045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:16832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:39.742 [2024-07-15 18:36:59.859058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:17:39.742 [2024-07-15 18:36:59.859075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:39.742 [2024-07-15 18:36:59.859088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:17:39.742 [2024-07-15 18:36:59.859106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:16864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:39.742 [2024-07-15 18:36:59.859119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:17:39.742 [2024-07-15 18:36:59.859146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:16880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:39.742 [2024-07-15 18:36:59.859168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:17:39.742 [2024-07-15 18:36:59.859186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:16896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:39.742 [2024-07-15 18:36:59.859200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:17:39.742 Received shutdown signal, test time was about 28.379790 seconds
00:17:39.742
00:17:39.742 Latency(us)
00:17:39.742 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:39.742 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:17:39.742 Verification LBA range: start 0x0 length 0x4000
00:17:39.742 Nvme0n1 : 28.38 11442.91 44.70 0.00 0.00 11165.27 233.59 3018551.31
00:17:39.742 ===================================================================================================================
00:17:39.742 Total : 11442.91 44.70 0.00 0.00 11165.27 233.59 3018551.31
00:17:39.742 18:37:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:17:40.001 18:37:02
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:17:40.001 18:37:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:40.001 18:37:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:17:40.001 18:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:40.001 18:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:17:40.001 18:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:40.001 18:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:17:40.001 18:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:40.001 18:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:40.001 rmmod nvme_tcp 00:17:40.001 rmmod nvme_fabrics 00:17:40.001 rmmod nvme_keyring 00:17:40.259 18:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:40.259 18:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:17:40.259 18:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:17:40.259 18:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 88713 ']' 00:17:40.259 18:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 88713 00:17:40.259 18:37:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 88713 ']' 00:17:40.259 18:37:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 88713 00:17:40.259 18:37:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:17:40.259 18:37:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:40.259 18:37:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88713 00:17:40.259 18:37:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:40.259 18:37:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:40.259 killing process with pid 88713 00:17:40.259 18:37:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88713' 00:17:40.259 18:37:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 88713 00:17:40.259 18:37:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 88713 00:17:40.517 18:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:40.517 18:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:40.517 18:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:40.517 18:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:40.517 18:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:40.517 18:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:40.517 18:37:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:17:40.517 18:37:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:40.517 18:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:40.517 00:17:40.517 real 0m33.816s 00:17:40.517 user 1m46.157s 00:17:40.517 sys 0m10.298s 00:17:40.517 18:37:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:40.517 ************************************ 00:17:40.517 END TEST nvmf_host_multipath_status 00:17:40.517 18:37:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:17:40.517 ************************************ 00:17:40.517 18:37:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:40.517 18:37:02 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:17:40.517 18:37:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:40.517 18:37:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:40.517 18:37:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:40.517 ************************************ 00:17:40.517 START TEST nvmf_discovery_remove_ifc 00:17:40.517 ************************************ 00:17:40.517 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:17:40.517 * Looking for test storage... 00:17:40.776 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:40.776 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:40.776 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:17:40.776 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:40.776 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:40.776 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:40.776 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:40.776 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:40.776 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:40.776 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:40.776 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:40.776 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:40.776 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:40.776 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:17:40.776 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=ee8aff67-4252-4979-91cf-1a72f40d57b6 00:17:40.776 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:40.776 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:40.776 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:40.776 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:40.776 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:40.776 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:40.777 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:40.777 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:40.777 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.777 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.777 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.777 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:17:40.777 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.777 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:17:40.777 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:40.777 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:40.777 18:37:03 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:40.777 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:40.777 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:40.777 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:40.777 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:40.777 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:40.777 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:17:40.777 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:17:40.777 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:17:40.777 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:17:40.777 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:17:40.777 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:17:40.777 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:17:40.777 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:40.777 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:40.777 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:40.777 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:40.777 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:40.777 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:40.777 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:40.777 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:40.777 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:40.777 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:40.777 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:40.777 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:40.777 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:40.777 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:40.777 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:40.777 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:40.777 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:40.777 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:40.777 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 
00:17:40.777 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:40.777 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:40.777 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:40.777 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:40.777 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:40.777 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:40.777 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:40.777 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:40.777 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:40.777 Cannot find device "nvmf_tgt_br" 00:17:40.777 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # true 00:17:40.777 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:40.777 Cannot find device "nvmf_tgt_br2" 00:17:40.777 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # true 00:17:40.777 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:40.777 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:40.777 Cannot find device "nvmf_tgt_br" 00:17:40.777 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # true 00:17:40.777 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:40.777 Cannot find device "nvmf_tgt_br2" 00:17:40.777 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # true 00:17:40.777 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:40.777 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:40.777 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:40.777 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:40.777 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:17:40.777 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:40.777 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:40.777 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:17:40.777 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:40.777 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:40.777 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:40.777 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:41.035 18:37:03 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:41.035 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:41.035 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:41.035 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:41.035 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:41.035 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:41.035 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:41.035 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:41.035 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:41.035 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:41.035 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:41.035 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:41.035 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:41.035 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:41.035 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:41.035 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:41.035 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:41.035 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:41.035 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:41.035 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:41.035 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:41.035 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:17:41.035 00:17:41.035 --- 10.0.0.2 ping statistics --- 00:17:41.035 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:41.035 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:17:41.035 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:41.035 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:41.035 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:17:41.035 00:17:41.035 --- 10.0.0.3 ping statistics --- 00:17:41.035 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:41.035 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:17:41.035 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:41.035 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:41.035 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:17:41.035 00:17:41.035 --- 10.0.0.1 ping statistics --- 00:17:41.035 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:41.035 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:17:41.035 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:41.035 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@433 -- # return 0 00:17:41.035 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:41.035 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:41.035 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:41.035 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:41.035 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:41.035 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:41.035 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:41.035 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:17:41.035 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:41.035 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:41.035 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:41.035 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=90066 00:17:41.035 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 90066 00:17:41.035 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 90066 ']' 00:17:41.035 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:41.035 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:41.035 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:41.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:41.035 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:41.035 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:41.035 18:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:41.293 [2024-07-15 18:37:03.658278] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 
00:17:41.293 [2024-07-15 18:37:03.658794] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:41.293 [2024-07-15 18:37:03.802530] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.293 [2024-07-15 18:37:03.887242] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:41.293 [2024-07-15 18:37:03.887289] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:41.293 [2024-07-15 18:37:03.887299] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:41.293 [2024-07-15 18:37:03.887307] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:41.293 [2024-07-15 18:37:03.887313] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:41.293 [2024-07-15 18:37:03.887344] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:42.254 18:37:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:42.254 18:37:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:17:42.254 18:37:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:42.254 18:37:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:42.254 18:37:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:42.254 18:37:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:42.254 18:37:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:17:42.254 18:37:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.254 18:37:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:42.254 [2024-07-15 18:37:04.574911] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:42.254 [2024-07-15 18:37:04.582993] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:17:42.254 null0 00:17:42.254 [2024-07-15 18:37:04.614924] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:42.254 18:37:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.254 18:37:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=90116 00:17:42.254 18:37:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:17:42.254 18:37:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 90116 /tmp/host.sock 00:17:42.254 18:37:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 90116 ']' 00:17:42.254 18:37:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:17:42.254 18:37:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:42.254 Waiting for process to start up and listen on UNIX 
domain socket /tmp/host.sock... 00:17:42.254 18:37:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:17:42.254 18:37:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:42.254 18:37:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:42.254 [2024-07-15 18:37:04.691116] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:17:42.254 [2024-07-15 18:37:04.691193] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90116 ] 00:17:42.254 [2024-07-15 18:37:04.831450] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:42.512 [2024-07-15 18:37:04.919854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:43.079 18:37:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:43.079 18:37:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:17:43.079 18:37:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:43.079 18:37:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:17:43.079 18:37:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.079 18:37:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:43.079 18:37:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.079 18:37:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:17:43.079 18:37:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.079 18:37:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:43.079 18:37:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.079 18:37:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:17:43.079 18:37:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.079 18:37:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:44.052 [2024-07-15 18:37:06.644345] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:17:44.052 [2024-07-15 18:37:06.644386] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:17:44.052 [2024-07-15 18:37:06.644399] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:44.311 [2024-07-15 18:37:06.730325] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:17:44.311 
[2024-07-15 18:37:06.787064] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:17:44.311 [2024-07-15 18:37:06.787127] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:17:44.311 [2024-07-15 18:37:06.787151] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:17:44.311 [2024-07-15 18:37:06.787175] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:17:44.311 [2024-07-15 18:37:06.787198] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:17:44.311 18:37:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.311 18:37:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:17:44.311 18:37:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:44.311 [2024-07-15 18:37:06.792690] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x6c5650 was disconnected and freed. delete nvme_qpair. 00:17:44.311 18:37:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:44.311 18:37:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:44.311 18:37:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.311 18:37:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:44.311 18:37:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:44.311 18:37:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:44.311 18:37:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.311 18:37:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:17:44.311 18:37:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:17:44.311 18:37:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:17:44.311 18:37:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:17:44.311 18:37:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:44.311 18:37:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:44.311 18:37:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.311 18:37:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:44.311 18:37:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:44.311 18:37:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:44.311 18:37:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:44.311 18:37:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.311 18:37:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:44.311 18:37:06 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:45.687 18:37:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:45.687 18:37:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:45.687 18:37:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.687 18:37:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:45.687 18:37:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:45.687 18:37:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:45.687 18:37:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:45.687 18:37:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.687 18:37:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:45.687 18:37:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:46.638 18:37:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:46.638 18:37:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:46.638 18:37:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:46.638 18:37:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.638 18:37:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:46.638 18:37:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:46.638 18:37:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:46.638 18:37:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.638 18:37:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:46.638 18:37:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:47.575 18:37:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:47.575 18:37:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:47.575 18:37:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:47.575 18:37:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.575 18:37:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:47.575 18:37:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:47.575 18:37:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:47.575 18:37:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.575 18:37:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:47.575 18:37:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:48.559 18:37:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:48.559 18:37:11 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:48.559 18:37:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:48.559 18:37:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.559 18:37:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:48.559 18:37:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:48.559 18:37:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:48.559 18:37:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.559 18:37:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:48.559 18:37:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:49.936 18:37:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:49.936 18:37:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:49.936 18:37:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.936 18:37:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:49.936 18:37:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:49.936 18:37:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:49.936 18:37:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:49.936 18:37:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.936 18:37:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:49.936 18:37:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:49.936 [2024-07-15 18:37:12.206206] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:17:49.936 [2024-07-15 18:37:12.206337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:49.936 [2024-07-15 18:37:12.206358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.936 [2024-07-15 18:37:12.206376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:49.936 [2024-07-15 18:37:12.206388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.936 [2024-07-15 18:37:12.206398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:49.936 [2024-07-15 18:37:12.206408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.936 [2024-07-15 18:37:12.206420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:49.936 [2024-07-15 18:37:12.206433] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.936 [2024-07-15 18:37:12.206450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:49.936 [2024-07-15 18:37:12.206459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.936 [2024-07-15 18:37:12.206469] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x68e900 is same with the state(5) to be set 00:17:49.936 [2024-07-15 18:37:12.216179] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x68e900 (9): Bad file descriptor 00:17:49.936 [2024-07-15 18:37:12.226193] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:50.871 18:37:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:50.871 18:37:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:50.871 18:37:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.871 18:37:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:50.871 18:37:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:50.871 18:37:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:50.871 18:37:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:50.871 [2024-07-15 18:37:13.251650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:17:50.871 [2024-07-15 18:37:13.251781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x68e900 with addr=10.0.0.2, port=4420 00:17:50.871 [2024-07-15 18:37:13.251826] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x68e900 is same with the state(5) to be set 00:17:50.871 [2024-07-15 18:37:13.251907] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x68e900 (9): Bad file descriptor 00:17:50.871 [2024-07-15 18:37:13.252945] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:50.871 [2024-07-15 18:37:13.253002] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:50.871 [2024-07-15 18:37:13.253030] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:50.871 [2024-07-15 18:37:13.253060] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:50.871 [2024-07-15 18:37:13.253130] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:17:50.871 [2024-07-15 18:37:13.253162] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:50.871 18:37:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.871 18:37:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:50.871 18:37:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:51.805 [2024-07-15 18:37:14.251622] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:51.805 [2024-07-15 18:37:14.251677] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:51.805 [2024-07-15 18:37:14.251687] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:51.805 [2024-07-15 18:37:14.251697] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:17:51.805 [2024-07-15 18:37:14.251717] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:51.805 [2024-07-15 18:37:14.251743] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:17:51.805 [2024-07-15 18:37:14.251792] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:51.805 [2024-07-15 18:37:14.251805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.805 [2024-07-15 18:37:14.251818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:51.805 [2024-07-15 18:37:14.251827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.805 [2024-07-15 18:37:14.251836] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:51.805 [2024-07-15 18:37:14.251845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.805 [2024-07-15 18:37:14.251854] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:51.805 [2024-07-15 18:37:14.251862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.805 [2024-07-15 18:37:14.251871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:51.805 [2024-07-15 18:37:14.251879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.805 [2024-07-15 18:37:14.251888] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:17:51.805 [2024-07-15 18:37:14.251904] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6313e0 (9): Bad file descriptor 00:17:51.805 [2024-07-15 18:37:14.252729] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:17:51.805 [2024-07-15 18:37:14.252747] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:17:51.805 18:37:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:51.805 18:37:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:51.805 18:37:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.805 18:37:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:51.805 18:37:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:51.805 18:37:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:51.805 18:37:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:51.805 18:37:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.805 18:37:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:17:51.805 18:37:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:51.805 18:37:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:51.805 18:37:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:17:51.805 18:37:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:51.805 18:37:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:51.805 18:37:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:51.805 18:37:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.805 18:37:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:51.805 18:37:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:51.805 18:37:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:51.805 18:37:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.064 18:37:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:17:52.064 18:37:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:53.001 18:37:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:53.001 18:37:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:53.001 18:37:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.001 18:37:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:53.001 18:37:15 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # xargs 00:17:53.001 18:37:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:53.001 18:37:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:53.001 18:37:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.001 18:37:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:17:53.001 18:37:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:53.936 [2024-07-15 18:37:16.260123] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:17:53.936 [2024-07-15 18:37:16.260156] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:17:53.936 [2024-07-15 18:37:16.260169] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:53.936 [2024-07-15 18:37:16.346090] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:17:53.936 [2024-07-15 18:37:16.401843] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:17:53.936 [2024-07-15 18:37:16.401892] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:17:53.936 [2024-07-15 18:37:16.401911] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:17:53.936 [2024-07-15 18:37:16.401926] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:17:53.936 [2024-07-15 18:37:16.401935] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:17:53.936 [2024-07-15 18:37:16.408594] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x6aa300 was disconnected and freed. delete nvme_qpair. 
00:17:53.936 18:37:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:53.936 18:37:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:53.936 18:37:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:53.936 18:37:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.936 18:37:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:53.936 18:37:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:53.936 18:37:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:53.936 18:37:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.936 18:37:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:17:53.936 18:37:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:17:53.936 18:37:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 90116 00:17:53.936 18:37:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 90116 ']' 00:17:53.936 18:37:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 90116 00:17:54.194 18:37:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:17:54.194 18:37:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:54.194 18:37:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 90116 00:17:54.194 killing process with pid 90116 00:17:54.194 18:37:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:54.194 18:37:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:54.194 18:37:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 90116' 00:17:54.194 18:37:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 90116 00:17:54.194 18:37:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 90116 00:17:54.194 18:37:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:17:54.194 18:37:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:54.194 18:37:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:17:54.194 18:37:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:54.194 18:37:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:17:54.194 18:37:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:54.194 18:37:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:54.453 rmmod nvme_tcp 00:17:54.453 rmmod nvme_fabrics 00:17:54.453 rmmod nvme_keyring 00:17:54.453 18:37:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:54.453 18:37:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:17:54.453 18:37:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:17:54.453 18:37:16 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 90066 ']' 00:17:54.453 18:37:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 90066 00:17:54.453 18:37:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 90066 ']' 00:17:54.453 18:37:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 90066 00:17:54.453 18:37:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:17:54.453 18:37:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:54.453 18:37:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 90066 00:17:54.453 killing process with pid 90066 00:17:54.453 18:37:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:54.453 18:37:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:54.453 18:37:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 90066' 00:17:54.453 18:37:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 90066 00:17:54.453 18:37:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 90066 00:17:54.713 18:37:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:54.713 18:37:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:54.713 18:37:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:54.713 18:37:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:54.713 18:37:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:54.713 18:37:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:54.713 18:37:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:54.713 18:37:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:54.713 18:37:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:54.713 ************************************ 00:17:54.713 END TEST nvmf_discovery_remove_ifc 00:17:54.713 ************************************ 00:17:54.713 00:17:54.713 real 0m14.162s 00:17:54.713 user 0m24.527s 00:17:54.713 sys 0m2.337s 00:17:54.713 18:37:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:54.713 18:37:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:54.713 18:37:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:54.713 18:37:17 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:17:54.713 18:37:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:54.713 18:37:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:54.713 18:37:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:54.713 ************************************ 00:17:54.713 START TEST nvmf_identify_kernel_target 00:17:54.713 ************************************ 00:17:54.713 18:37:17 nvmf_tcp.nvmf_identify_kernel_target 
-- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:17:54.972 * Looking for test storage... 00:17:54.972 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:54.972 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:54.972 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:17:54.972 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:54.972 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:54.972 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:54.972 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:54.972 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:54.972 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:54.972 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:54.972 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:54.972 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:54.972 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:54.972 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:17:54.972 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=ee8aff67-4252-4979-91cf-1a72f40d57b6 00:17:54.972 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:54.972 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:54.972 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:54.972 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:54.972 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:54.972 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:54.972 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:54.972 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:54.972 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.972 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.972 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.972 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:17:54.972 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.972 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:17:54.972 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:54.972 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:54.972 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:54.972 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:54.972 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:54.972 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:54.972 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:54.972 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:54.972 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:17:54.972 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:54.972 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:54.972 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:54.972 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:54.972 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:54.972 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:17:54.973 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:54.973 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:54.973 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:54.973 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:54.973 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:54.973 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:54.973 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:54.973 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:54.973 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:54.973 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:54.973 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:54.973 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:54.973 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:54.973 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:54.973 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:54.973 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:54.973 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:54.973 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:54.973 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:54.973 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:54.973 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:54.973 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:54.973 Cannot find device "nvmf_tgt_br" 00:17:54.973 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # true 00:17:54.973 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:54.973 Cannot find device "nvmf_tgt_br2" 00:17:54.973 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # true 00:17:54.973 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:54.973 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:54.973 Cannot find device "nvmf_tgt_br" 00:17:54.973 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # true 00:17:54.973 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:54.973 Cannot find device "nvmf_tgt_br2" 00:17:54.973 18:37:17 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # true 00:17:54.973 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:54.973 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:54.973 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:55.232 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:55.232 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:17:55.232 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:55.232 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:55.232 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:17:55.232 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:55.232 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:55.232 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:55.232 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:55.232 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:55.232 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:55.232 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:55.232 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:55.232 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:55.232 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:55.232 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:55.232 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:55.232 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:55.232 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:55.232 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:55.232 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:55.232 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:55.232 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:55.232 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:55.232 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:17:55.232 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:55.232 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:55.232 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:55.232 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:55.232 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:55.232 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.102 ms 00:17:55.232 00:17:55.232 --- 10.0.0.2 ping statistics --- 00:17:55.232 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:55.232 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:17:55.232 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:55.232 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:55.232 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:17:55.232 00:17:55.232 --- 10.0.0.3 ping statistics --- 00:17:55.232 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:55.232 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:17:55.232 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:55.232 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:55.232 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:17:55.232 00:17:55.232 --- 10.0.0.1 ping statistics --- 00:17:55.232 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:55.232 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:17:55.232 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:55.232 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@433 -- # return 0 00:17:55.232 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:55.232 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:55.232 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:55.232 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:55.232 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:55.232 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:55.232 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:55.491 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:17:55.491 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:17:55.491 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:17:55.491 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:55.491 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:55.492 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:55.492 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:55.492 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:55.492 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:55.492 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:55.492 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:55.492 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:55.492 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:17:55.492 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:17:55.492 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:17:55.492 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:17:55.492 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:17:55.492 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:17:55.492 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:17:55.492 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:17:55.492 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:17:55.492 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:17:55.492 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:17:55.492 18:37:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:17:56.059 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:56.059 Waiting for block devices as requested 00:17:56.059 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:17:56.059 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:17:56.319 18:37:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:56.319 18:37:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:17:56.319 18:37:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:17:56.319 18:37:18 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:17:56.319 18:37:18 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:17:56.319 18:37:18 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:56.319 18:37:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:17:56.319 18:37:18 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:17:56.319 18:37:18 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:17:56.319 No valid GPT data, bailing 00:17:56.319 18:37:18 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:17:56.319 18:37:18 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:17:56.319 18:37:18 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:17:56.319 18:37:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:17:56.319 18:37:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:56.319 18:37:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:17:56.319 18:37:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:17:56.319 18:37:18 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:17:56.319 18:37:18 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:17:56.319 18:37:18 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:56.319 18:37:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:17:56.319 18:37:18 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:17:56.319 18:37:18 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:17:56.319 No valid GPT data, bailing 00:17:56.319 18:37:18 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:17:56.319 18:37:18 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 
-- # pt= 00:17:56.319 18:37:18 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:17:56.319 18:37:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:17:56.319 18:37:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:56.319 18:37:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:17:56.319 18:37:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:17:56.319 18:37:18 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:17:56.319 18:37:18 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:17:56.319 18:37:18 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:56.319 18:37:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:17:56.319 18:37:18 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:17:56.319 18:37:18 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:17:56.319 No valid GPT data, bailing 00:17:56.319 18:37:18 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:17:56.319 18:37:18 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:17:56.319 18:37:18 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:17:56.319 18:37:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:17:56.319 18:37:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:56.319 18:37:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:17:56.319 18:37:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:17:56.319 18:37:18 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:17:56.319 18:37:18 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:17:56.319 18:37:18 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:56.319 18:37:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:17:56.319 18:37:18 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:17:56.319 18:37:18 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:17:56.578 No valid GPT data, bailing 00:17:56.578 18:37:18 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:17:56.578 18:37:18 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:17:56.578 18:37:18 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:17:56.578 18:37:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:17:56.578 18:37:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:17:56.578 18:37:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 
00:17:56.578 18:37:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:17:56.578 18:37:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:17:56.578 18:37:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:17:56.578 18:37:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:17:56.578 18:37:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:17:56.578 18:37:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:17:56.578 18:37:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:17:56.578 18:37:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:17:56.578 18:37:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:17:56.578 18:37:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:17:56.578 18:37:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:17:56.578 18:37:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid=ee8aff67-4252-4979-91cf-1a72f40d57b6 -a 10.0.0.1 -t tcp -s 4420 00:17:56.578 00:17:56.578 Discovery Log Number of Records 2, Generation counter 2 00:17:56.578 =====Discovery Log Entry 0====== 00:17:56.578 trtype: tcp 00:17:56.578 adrfam: ipv4 00:17:56.578 subtype: current discovery subsystem 00:17:56.578 treq: not specified, sq flow control disable supported 00:17:56.578 portid: 1 00:17:56.578 trsvcid: 4420 00:17:56.578 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:56.578 traddr: 10.0.0.1 00:17:56.578 eflags: none 00:17:56.578 sectype: none 00:17:56.578 =====Discovery Log Entry 1====== 00:17:56.578 trtype: tcp 00:17:56.578 adrfam: ipv4 00:17:56.578 subtype: nvme subsystem 00:17:56.578 treq: not specified, sq flow control disable supported 00:17:56.578 portid: 1 00:17:56.578 trsvcid: 4420 00:17:56.578 subnqn: nqn.2016-06.io.spdk:testnqn 00:17:56.578 traddr: 10.0.0.1 00:17:56.578 eflags: none 00:17:56.578 sectype: none 00:17:56.578 18:37:19 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:17:56.578 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:17:56.838 ===================================================== 00:17:56.838 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:17:56.838 ===================================================== 00:17:56.838 Controller Capabilities/Features 00:17:56.838 ================================ 00:17:56.838 Vendor ID: 0000 00:17:56.838 Subsystem Vendor ID: 0000 00:17:56.838 Serial Number: 997b786974480d38c065 00:17:56.838 Model Number: Linux 00:17:56.838 Firmware Version: 6.7.0-68 00:17:56.838 Recommended Arb Burst: 0 00:17:56.838 IEEE OUI Identifier: 00 00 00 00:17:56.838 Multi-path I/O 00:17:56.838 May have multiple subsystem ports: No 00:17:56.838 May have multiple controllers: No 00:17:56.838 Associated with SR-IOV VF: No 00:17:56.838 Max Data Transfer Size: Unlimited 00:17:56.838 Max Number of Namespaces: 0 
00:17:56.838 Max Number of I/O Queues: 1024 00:17:56.838 NVMe Specification Version (VS): 1.3 00:17:56.838 NVMe Specification Version (Identify): 1.3 00:17:56.838 Maximum Queue Entries: 1024 00:17:56.838 Contiguous Queues Required: No 00:17:56.838 Arbitration Mechanisms Supported 00:17:56.838 Weighted Round Robin: Not Supported 00:17:56.838 Vendor Specific: Not Supported 00:17:56.838 Reset Timeout: 7500 ms 00:17:56.838 Doorbell Stride: 4 bytes 00:17:56.838 NVM Subsystem Reset: Not Supported 00:17:56.838 Command Sets Supported 00:17:56.838 NVM Command Set: Supported 00:17:56.838 Boot Partition: Not Supported 00:17:56.838 Memory Page Size Minimum: 4096 bytes 00:17:56.838 Memory Page Size Maximum: 4096 bytes 00:17:56.838 Persistent Memory Region: Not Supported 00:17:56.838 Optional Asynchronous Events Supported 00:17:56.838 Namespace Attribute Notices: Not Supported 00:17:56.838 Firmware Activation Notices: Not Supported 00:17:56.838 ANA Change Notices: Not Supported 00:17:56.838 PLE Aggregate Log Change Notices: Not Supported 00:17:56.838 LBA Status Info Alert Notices: Not Supported 00:17:56.838 EGE Aggregate Log Change Notices: Not Supported 00:17:56.838 Normal NVM Subsystem Shutdown event: Not Supported 00:17:56.838 Zone Descriptor Change Notices: Not Supported 00:17:56.838 Discovery Log Change Notices: Supported 00:17:56.838 Controller Attributes 00:17:56.838 128-bit Host Identifier: Not Supported 00:17:56.838 Non-Operational Permissive Mode: Not Supported 00:17:56.838 NVM Sets: Not Supported 00:17:56.838 Read Recovery Levels: Not Supported 00:17:56.838 Endurance Groups: Not Supported 00:17:56.838 Predictable Latency Mode: Not Supported 00:17:56.838 Traffic Based Keep ALive: Not Supported 00:17:56.838 Namespace Granularity: Not Supported 00:17:56.838 SQ Associations: Not Supported 00:17:56.838 UUID List: Not Supported 00:17:56.838 Multi-Domain Subsystem: Not Supported 00:17:56.838 Fixed Capacity Management: Not Supported 00:17:56.838 Variable Capacity Management: Not Supported 00:17:56.838 Delete Endurance Group: Not Supported 00:17:56.838 Delete NVM Set: Not Supported 00:17:56.838 Extended LBA Formats Supported: Not Supported 00:17:56.838 Flexible Data Placement Supported: Not Supported 00:17:56.838 00:17:56.838 Controller Memory Buffer Support 00:17:56.838 ================================ 00:17:56.838 Supported: No 00:17:56.838 00:17:56.838 Persistent Memory Region Support 00:17:56.838 ================================ 00:17:56.838 Supported: No 00:17:56.838 00:17:56.838 Admin Command Set Attributes 00:17:56.838 ============================ 00:17:56.838 Security Send/Receive: Not Supported 00:17:56.838 Format NVM: Not Supported 00:17:56.838 Firmware Activate/Download: Not Supported 00:17:56.838 Namespace Management: Not Supported 00:17:56.838 Device Self-Test: Not Supported 00:17:56.838 Directives: Not Supported 00:17:56.838 NVMe-MI: Not Supported 00:17:56.838 Virtualization Management: Not Supported 00:17:56.838 Doorbell Buffer Config: Not Supported 00:17:56.838 Get LBA Status Capability: Not Supported 00:17:56.838 Command & Feature Lockdown Capability: Not Supported 00:17:56.838 Abort Command Limit: 1 00:17:56.838 Async Event Request Limit: 1 00:17:56.838 Number of Firmware Slots: N/A 00:17:56.838 Firmware Slot 1 Read-Only: N/A 00:17:56.838 Firmware Activation Without Reset: N/A 00:17:56.838 Multiple Update Detection Support: N/A 00:17:56.838 Firmware Update Granularity: No Information Provided 00:17:56.838 Per-Namespace SMART Log: No 00:17:56.838 Asymmetric Namespace Access Log Page: 
Not Supported 00:17:56.838 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:17:56.838 Command Effects Log Page: Not Supported 00:17:56.838 Get Log Page Extended Data: Supported 00:17:56.838 Telemetry Log Pages: Not Supported 00:17:56.838 Persistent Event Log Pages: Not Supported 00:17:56.838 Supported Log Pages Log Page: May Support 00:17:56.838 Commands Supported & Effects Log Page: Not Supported 00:17:56.838 Feature Identifiers & Effects Log Page:May Support 00:17:56.838 NVMe-MI Commands & Effects Log Page: May Support 00:17:56.838 Data Area 4 for Telemetry Log: Not Supported 00:17:56.838 Error Log Page Entries Supported: 1 00:17:56.838 Keep Alive: Not Supported 00:17:56.838 00:17:56.838 NVM Command Set Attributes 00:17:56.838 ========================== 00:17:56.838 Submission Queue Entry Size 00:17:56.838 Max: 1 00:17:56.838 Min: 1 00:17:56.838 Completion Queue Entry Size 00:17:56.838 Max: 1 00:17:56.838 Min: 1 00:17:56.838 Number of Namespaces: 0 00:17:56.838 Compare Command: Not Supported 00:17:56.838 Write Uncorrectable Command: Not Supported 00:17:56.839 Dataset Management Command: Not Supported 00:17:56.839 Write Zeroes Command: Not Supported 00:17:56.839 Set Features Save Field: Not Supported 00:17:56.839 Reservations: Not Supported 00:17:56.839 Timestamp: Not Supported 00:17:56.839 Copy: Not Supported 00:17:56.839 Volatile Write Cache: Not Present 00:17:56.839 Atomic Write Unit (Normal): 1 00:17:56.839 Atomic Write Unit (PFail): 1 00:17:56.839 Atomic Compare & Write Unit: 1 00:17:56.839 Fused Compare & Write: Not Supported 00:17:56.839 Scatter-Gather List 00:17:56.839 SGL Command Set: Supported 00:17:56.839 SGL Keyed: Not Supported 00:17:56.839 SGL Bit Bucket Descriptor: Not Supported 00:17:56.839 SGL Metadata Pointer: Not Supported 00:17:56.839 Oversized SGL: Not Supported 00:17:56.839 SGL Metadata Address: Not Supported 00:17:56.839 SGL Offset: Supported 00:17:56.839 Transport SGL Data Block: Not Supported 00:17:56.839 Replay Protected Memory Block: Not Supported 00:17:56.839 00:17:56.839 Firmware Slot Information 00:17:56.839 ========================= 00:17:56.839 Active slot: 0 00:17:56.839 00:17:56.839 00:17:56.839 Error Log 00:17:56.839 ========= 00:17:56.839 00:17:56.839 Active Namespaces 00:17:56.839 ================= 00:17:56.839 Discovery Log Page 00:17:56.839 ================== 00:17:56.839 Generation Counter: 2 00:17:56.839 Number of Records: 2 00:17:56.839 Record Format: 0 00:17:56.839 00:17:56.839 Discovery Log Entry 0 00:17:56.839 ---------------------- 00:17:56.839 Transport Type: 3 (TCP) 00:17:56.839 Address Family: 1 (IPv4) 00:17:56.839 Subsystem Type: 3 (Current Discovery Subsystem) 00:17:56.839 Entry Flags: 00:17:56.839 Duplicate Returned Information: 0 00:17:56.839 Explicit Persistent Connection Support for Discovery: 0 00:17:56.839 Transport Requirements: 00:17:56.839 Secure Channel: Not Specified 00:17:56.839 Port ID: 1 (0x0001) 00:17:56.839 Controller ID: 65535 (0xffff) 00:17:56.839 Admin Max SQ Size: 32 00:17:56.839 Transport Service Identifier: 4420 00:17:56.839 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:17:56.839 Transport Address: 10.0.0.1 00:17:56.839 Discovery Log Entry 1 00:17:56.839 ---------------------- 00:17:56.839 Transport Type: 3 (TCP) 00:17:56.839 Address Family: 1 (IPv4) 00:17:56.839 Subsystem Type: 2 (NVM Subsystem) 00:17:56.839 Entry Flags: 00:17:56.839 Duplicate Returned Information: 0 00:17:56.839 Explicit Persistent Connection Support for Discovery: 0 00:17:56.839 Transport Requirements: 00:17:56.839 
Secure Channel: Not Specified 00:17:56.839 Port ID: 1 (0x0001) 00:17:56.839 Controller ID: 65535 (0xffff) 00:17:56.839 Admin Max SQ Size: 32 00:17:56.839 Transport Service Identifier: 4420 00:17:56.839 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:17:56.839 Transport Address: 10.0.0.1 00:17:56.839 18:37:19 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:17:56.839 get_feature(0x01) failed 00:17:56.839 get_feature(0x02) failed 00:17:56.839 get_feature(0x04) failed 00:17:56.839 ===================================================== 00:17:56.839 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:17:56.839 ===================================================== 00:17:56.839 Controller Capabilities/Features 00:17:56.839 ================================ 00:17:56.839 Vendor ID: 0000 00:17:56.839 Subsystem Vendor ID: 0000 00:17:56.839 Serial Number: 3510edc7eeeaa5e8370c 00:17:56.839 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:17:56.839 Firmware Version: 6.7.0-68 00:17:56.839 Recommended Arb Burst: 6 00:17:56.839 IEEE OUI Identifier: 00 00 00 00:17:56.839 Multi-path I/O 00:17:56.839 May have multiple subsystem ports: Yes 00:17:56.839 May have multiple controllers: Yes 00:17:56.839 Associated with SR-IOV VF: No 00:17:56.839 Max Data Transfer Size: Unlimited 00:17:56.839 Max Number of Namespaces: 1024 00:17:56.839 Max Number of I/O Queues: 128 00:17:56.839 NVMe Specification Version (VS): 1.3 00:17:56.839 NVMe Specification Version (Identify): 1.3 00:17:56.839 Maximum Queue Entries: 1024 00:17:56.839 Contiguous Queues Required: No 00:17:56.839 Arbitration Mechanisms Supported 00:17:56.839 Weighted Round Robin: Not Supported 00:17:56.839 Vendor Specific: Not Supported 00:17:56.839 Reset Timeout: 7500 ms 00:17:56.839 Doorbell Stride: 4 bytes 00:17:56.839 NVM Subsystem Reset: Not Supported 00:17:56.839 Command Sets Supported 00:17:56.839 NVM Command Set: Supported 00:17:56.839 Boot Partition: Not Supported 00:17:56.839 Memory Page Size Minimum: 4096 bytes 00:17:56.839 Memory Page Size Maximum: 4096 bytes 00:17:56.839 Persistent Memory Region: Not Supported 00:17:56.839 Optional Asynchronous Events Supported 00:17:56.839 Namespace Attribute Notices: Supported 00:17:56.839 Firmware Activation Notices: Not Supported 00:17:56.839 ANA Change Notices: Supported 00:17:56.839 PLE Aggregate Log Change Notices: Not Supported 00:17:56.839 LBA Status Info Alert Notices: Not Supported 00:17:56.839 EGE Aggregate Log Change Notices: Not Supported 00:17:56.839 Normal NVM Subsystem Shutdown event: Not Supported 00:17:56.839 Zone Descriptor Change Notices: Not Supported 00:17:56.839 Discovery Log Change Notices: Not Supported 00:17:56.839 Controller Attributes 00:17:56.839 128-bit Host Identifier: Supported 00:17:56.839 Non-Operational Permissive Mode: Not Supported 00:17:56.839 NVM Sets: Not Supported 00:17:56.839 Read Recovery Levels: Not Supported 00:17:56.839 Endurance Groups: Not Supported 00:17:56.839 Predictable Latency Mode: Not Supported 00:17:56.839 Traffic Based Keep ALive: Supported 00:17:56.839 Namespace Granularity: Not Supported 00:17:56.839 SQ Associations: Not Supported 00:17:56.839 UUID List: Not Supported 00:17:56.839 Multi-Domain Subsystem: Not Supported 00:17:56.839 Fixed Capacity Management: Not Supported 00:17:56.839 Variable Capacity Management: Not Supported 00:17:56.839 
Delete Endurance Group: Not Supported 00:17:56.839 Delete NVM Set: Not Supported 00:17:56.839 Extended LBA Formats Supported: Not Supported 00:17:56.839 Flexible Data Placement Supported: Not Supported 00:17:56.839 00:17:56.839 Controller Memory Buffer Support 00:17:56.839 ================================ 00:17:56.839 Supported: No 00:17:56.839 00:17:56.839 Persistent Memory Region Support 00:17:56.839 ================================ 00:17:56.839 Supported: No 00:17:56.839 00:17:56.839 Admin Command Set Attributes 00:17:56.839 ============================ 00:17:56.839 Security Send/Receive: Not Supported 00:17:56.839 Format NVM: Not Supported 00:17:56.839 Firmware Activate/Download: Not Supported 00:17:56.839 Namespace Management: Not Supported 00:17:56.839 Device Self-Test: Not Supported 00:17:56.839 Directives: Not Supported 00:17:56.839 NVMe-MI: Not Supported 00:17:56.839 Virtualization Management: Not Supported 00:17:56.839 Doorbell Buffer Config: Not Supported 00:17:56.839 Get LBA Status Capability: Not Supported 00:17:56.839 Command & Feature Lockdown Capability: Not Supported 00:17:56.839 Abort Command Limit: 4 00:17:56.839 Async Event Request Limit: 4 00:17:56.839 Number of Firmware Slots: N/A 00:17:56.839 Firmware Slot 1 Read-Only: N/A 00:17:56.839 Firmware Activation Without Reset: N/A 00:17:56.839 Multiple Update Detection Support: N/A 00:17:56.839 Firmware Update Granularity: No Information Provided 00:17:56.839 Per-Namespace SMART Log: Yes 00:17:56.839 Asymmetric Namespace Access Log Page: Supported 00:17:56.839 ANA Transition Time : 10 sec 00:17:56.839 00:17:56.839 Asymmetric Namespace Access Capabilities 00:17:56.839 ANA Optimized State : Supported 00:17:56.839 ANA Non-Optimized State : Supported 00:17:56.839 ANA Inaccessible State : Supported 00:17:56.839 ANA Persistent Loss State : Supported 00:17:56.839 ANA Change State : Supported 00:17:56.839 ANAGRPID is not changed : No 00:17:56.839 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:17:56.839 00:17:56.839 ANA Group Identifier Maximum : 128 00:17:56.839 Number of ANA Group Identifiers : 128 00:17:56.839 Max Number of Allowed Namespaces : 1024 00:17:56.839 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:17:56.839 Command Effects Log Page: Supported 00:17:56.839 Get Log Page Extended Data: Supported 00:17:56.839 Telemetry Log Pages: Not Supported 00:17:56.839 Persistent Event Log Pages: Not Supported 00:17:56.839 Supported Log Pages Log Page: May Support 00:17:56.839 Commands Supported & Effects Log Page: Not Supported 00:17:56.839 Feature Identifiers & Effects Log Page:May Support 00:17:56.839 NVMe-MI Commands & Effects Log Page: May Support 00:17:56.839 Data Area 4 for Telemetry Log: Not Supported 00:17:56.839 Error Log Page Entries Supported: 128 00:17:56.839 Keep Alive: Supported 00:17:56.839 Keep Alive Granularity: 1000 ms 00:17:56.839 00:17:56.839 NVM Command Set Attributes 00:17:56.839 ========================== 00:17:56.839 Submission Queue Entry Size 00:17:56.839 Max: 64 00:17:56.839 Min: 64 00:17:56.839 Completion Queue Entry Size 00:17:56.839 Max: 16 00:17:56.839 Min: 16 00:17:56.839 Number of Namespaces: 1024 00:17:56.839 Compare Command: Not Supported 00:17:56.839 Write Uncorrectable Command: Not Supported 00:17:56.839 Dataset Management Command: Supported 00:17:56.840 Write Zeroes Command: Supported 00:17:56.840 Set Features Save Field: Not Supported 00:17:56.840 Reservations: Not Supported 00:17:56.840 Timestamp: Not Supported 00:17:56.840 Copy: Not Supported 00:17:56.840 Volatile Write Cache: Present 
00:17:56.840 Atomic Write Unit (Normal): 1 00:17:56.840 Atomic Write Unit (PFail): 1 00:17:56.840 Atomic Compare & Write Unit: 1 00:17:56.840 Fused Compare & Write: Not Supported 00:17:56.840 Scatter-Gather List 00:17:56.840 SGL Command Set: Supported 00:17:56.840 SGL Keyed: Not Supported 00:17:56.840 SGL Bit Bucket Descriptor: Not Supported 00:17:56.840 SGL Metadata Pointer: Not Supported 00:17:56.840 Oversized SGL: Not Supported 00:17:56.840 SGL Metadata Address: Not Supported 00:17:56.840 SGL Offset: Supported 00:17:56.840 Transport SGL Data Block: Not Supported 00:17:56.840 Replay Protected Memory Block: Not Supported 00:17:56.840 00:17:56.840 Firmware Slot Information 00:17:56.840 ========================= 00:17:56.840 Active slot: 0 00:17:56.840 00:17:56.840 Asymmetric Namespace Access 00:17:56.840 =========================== 00:17:56.840 Change Count : 0 00:17:56.840 Number of ANA Group Descriptors : 1 00:17:56.840 ANA Group Descriptor : 0 00:17:56.840 ANA Group ID : 1 00:17:56.840 Number of NSID Values : 1 00:17:56.840 Change Count : 0 00:17:56.840 ANA State : 1 00:17:56.840 Namespace Identifier : 1 00:17:56.840 00:17:56.840 Commands Supported and Effects 00:17:56.840 ============================== 00:17:56.840 Admin Commands 00:17:56.840 -------------- 00:17:56.840 Get Log Page (02h): Supported 00:17:56.840 Identify (06h): Supported 00:17:56.840 Abort (08h): Supported 00:17:56.840 Set Features (09h): Supported 00:17:56.840 Get Features (0Ah): Supported 00:17:56.840 Asynchronous Event Request (0Ch): Supported 00:17:56.840 Keep Alive (18h): Supported 00:17:56.840 I/O Commands 00:17:56.840 ------------ 00:17:56.840 Flush (00h): Supported 00:17:56.840 Write (01h): Supported LBA-Change 00:17:56.840 Read (02h): Supported 00:17:56.840 Write Zeroes (08h): Supported LBA-Change 00:17:56.840 Dataset Management (09h): Supported 00:17:56.840 00:17:56.840 Error Log 00:17:56.840 ========= 00:17:56.840 Entry: 0 00:17:56.840 Error Count: 0x3 00:17:56.840 Submission Queue Id: 0x0 00:17:56.840 Command Id: 0x5 00:17:56.840 Phase Bit: 0 00:17:56.840 Status Code: 0x2 00:17:56.840 Status Code Type: 0x0 00:17:56.840 Do Not Retry: 1 00:17:56.840 Error Location: 0x28 00:17:56.840 LBA: 0x0 00:17:56.840 Namespace: 0x0 00:17:56.840 Vendor Log Page: 0x0 00:17:56.840 ----------- 00:17:56.840 Entry: 1 00:17:56.840 Error Count: 0x2 00:17:56.840 Submission Queue Id: 0x0 00:17:56.840 Command Id: 0x5 00:17:56.840 Phase Bit: 0 00:17:56.840 Status Code: 0x2 00:17:56.840 Status Code Type: 0x0 00:17:56.840 Do Not Retry: 1 00:17:56.840 Error Location: 0x28 00:17:56.840 LBA: 0x0 00:17:56.840 Namespace: 0x0 00:17:56.840 Vendor Log Page: 0x0 00:17:56.840 ----------- 00:17:56.840 Entry: 2 00:17:56.840 Error Count: 0x1 00:17:56.840 Submission Queue Id: 0x0 00:17:56.840 Command Id: 0x4 00:17:56.840 Phase Bit: 0 00:17:56.840 Status Code: 0x2 00:17:56.840 Status Code Type: 0x0 00:17:56.840 Do Not Retry: 1 00:17:56.840 Error Location: 0x28 00:17:56.840 LBA: 0x0 00:17:56.840 Namespace: 0x0 00:17:56.840 Vendor Log Page: 0x0 00:17:56.840 00:17:56.840 Number of Queues 00:17:56.840 ================ 00:17:56.840 Number of I/O Submission Queues: 128 00:17:56.840 Number of I/O Completion Queues: 128 00:17:56.840 00:17:56.840 ZNS Specific Controller Data 00:17:56.840 ============================ 00:17:56.840 Zone Append Size Limit: 0 00:17:56.840 00:17:56.840 00:17:56.840 Active Namespaces 00:17:56.840 ================= 00:17:56.840 get_feature(0x05) failed 00:17:56.840 Namespace ID:1 00:17:56.840 Command Set Identifier: NVM (00h) 
00:17:56.840 Deallocate: Supported 00:17:56.840 Deallocated/Unwritten Error: Not Supported 00:17:56.840 Deallocated Read Value: Unknown 00:17:56.840 Deallocate in Write Zeroes: Not Supported 00:17:56.840 Deallocated Guard Field: 0xFFFF 00:17:56.840 Flush: Supported 00:17:56.840 Reservation: Not Supported 00:17:56.840 Namespace Sharing Capabilities: Multiple Controllers 00:17:56.840 Size (in LBAs): 1310720 (5GiB) 00:17:56.840 Capacity (in LBAs): 1310720 (5GiB) 00:17:56.840 Utilization (in LBAs): 1310720 (5GiB) 00:17:56.840 UUID: e12e19d2-342a-4bd7-ab2d-ffce361386fb 00:17:56.840 Thin Provisioning: Not Supported 00:17:56.840 Per-NS Atomic Units: Yes 00:17:56.840 Atomic Boundary Size (Normal): 0 00:17:56.840 Atomic Boundary Size (PFail): 0 00:17:56.840 Atomic Boundary Offset: 0 00:17:56.840 NGUID/EUI64 Never Reused: No 00:17:56.840 ANA group ID: 1 00:17:56.840 Namespace Write Protected: No 00:17:56.840 Number of LBA Formats: 1 00:17:56.840 Current LBA Format: LBA Format #00 00:17:56.840 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:17:56.840 00:17:56.840 18:37:19 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:17:56.840 18:37:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:57.099 18:37:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:17:57.099 18:37:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:57.099 18:37:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:17:57.099 18:37:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:57.099 18:37:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:57.099 rmmod nvme_tcp 00:17:57.099 rmmod nvme_fabrics 00:17:57.099 18:37:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:57.099 18:37:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:17:57.099 18:37:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:17:57.099 18:37:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:17:57.099 18:37:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:57.099 18:37:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:57.099 18:37:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:57.099 18:37:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:57.099 18:37:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:57.099 18:37:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:57.099 18:37:19 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:57.099 18:37:19 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:57.099 18:37:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:57.099 18:37:19 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:17:57.099 18:37:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:17:57.099 
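Note on the identify dump above: it was produced by pointing spdk_nvme_identify at the kernel NVMe-oF/TCP target through a transport ID string, and the handful of "get_feature(...) failed" lines are simply the tool probing optional features this target apparently does not report. A minimal sketch of re-running that step by hand, with the binary path, address and NQN taken from the trace above:

SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin    # location used by this job
"$SPDK_BIN"/spdk_nvme_identify \
    -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
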
18:37:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:17:57.099 18:37:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:17:57.099 18:37:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:17:57.099 18:37:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:17:57.099 18:37:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:17:57.099 18:37:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:17:57.100 18:37:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:17:57.100 18:37:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:58.035 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:58.035 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:17:58.035 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:17:58.294 00:17:58.294 real 0m3.488s 00:17:58.294 user 0m1.186s 00:17:58.294 sys 0m1.820s 00:17:58.294 18:37:20 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:58.294 18:37:20 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.294 ************************************ 00:17:58.294 END TEST nvmf_identify_kernel_target 00:17:58.294 ************************************ 00:17:58.294 18:37:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:58.295 18:37:20 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:17:58.295 18:37:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:58.295 18:37:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:58.295 18:37:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:58.295 ************************************ 00:17:58.295 START TEST nvmf_auth_host 00:17:58.295 ************************************ 00:17:58.295 18:37:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:17:58.590 * Looking for test storage... 
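Before the nvmf_auth_host run gets going, it is worth spelling out what the clean_kernel_target steps traced just above did: they dismantle the configfs-based kernel target in the reverse order it was built and unload the nvmet modules. Condensed into a standalone sketch (the bare 'echo 0' in the trace does not show its redirection target; writing it to the namespace's enable attribute is an assumption):

nqn=nqn.2016-06.io.spdk:testnqn
cfg=/sys/kernel/config/nvmet

echo 0 > "$cfg/subsystems/$nqn/namespaces/1/enable"   # assumed target of the bare 'echo 0'
rm -f "$cfg/ports/1/subsystems/$nqn"                  # unlink the subsystem from port 1
rmdir "$cfg/subsystems/$nqn/namespaces/1"             # then remove namespace, port, subsystem
rmdir "$cfg/ports/1"
rmdir "$cfg/subsystems/$nqn"
modprobe -r nvmet_tcp nvmet                           # unload the kernel target modules
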
00:17:58.590 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:58.590 18:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:58.590 18:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:17:58.590 18:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:58.590 18:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:58.590 18:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:58.590 18:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:58.590 18:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:58.590 18:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:58.590 18:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:58.590 18:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:58.590 18:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:58.590 18:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:58.590 18:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:17:58.590 18:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=ee8aff67-4252-4979-91cf-1a72f40d57b6 00:17:58.590 18:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:58.590 18:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:58.590 18:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:58.590 18:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:58.590 18:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:58.590 18:37:20 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:58.590 18:37:20 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:58.590 18:37:20 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:58.590 18:37:20 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.590 18:37:20 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.590 18:37:20 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.590 18:37:20 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:17:58.590 18:37:20 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.590 18:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:17:58.590 18:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:58.590 18:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:58.590 18:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:58.590 18:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:58.590 18:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:58.590 18:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:58.590 18:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:58.590 18:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:58.590 18:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:58.590 18:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:58.590 18:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:17:58.590 18:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:17:58.590 18:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:58.590 18:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:17:58.590 18:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:17:58.591 18:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:17:58.591 18:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:17:58.591 18:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:58.591 18:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:58.591 18:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:58.591 18:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:58.591 18:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:58.591 18:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:58.591 18:37:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:58.591 18:37:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:58.591 18:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:58.591 18:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:58.591 18:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:58.591 18:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:58.591 18:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:58.591 18:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:58.591 18:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:58.591 18:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:58.591 18:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:58.591 18:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:58.591 18:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:58.591 18:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:58.591 18:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:58.591 18:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:58.591 18:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:58.591 18:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:58.591 18:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:58.591 18:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:58.591 18:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:58.591 18:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:58.591 Cannot find device "nvmf_tgt_br" 00:17:58.591 18:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # true 00:17:58.591 18:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:58.591 Cannot find device "nvmf_tgt_br2" 00:17:58.591 18:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # true 00:17:58.591 18:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:58.591 18:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:58.591 Cannot find device "nvmf_tgt_br" 
00:17:58.591 18:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # true 00:17:58.591 18:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:58.591 Cannot find device "nvmf_tgt_br2" 00:17:58.591 18:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # true 00:17:58.591 18:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:58.591 18:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:58.591 18:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:58.591 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:58.591 18:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:17:58.591 18:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:58.591 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:58.591 18:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:17:58.591 18:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:58.591 18:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:58.591 18:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:58.591 18:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:58.591 18:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:58.591 18:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:58.849 18:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:58.849 18:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:58.849 18:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:58.849 18:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:58.849 18:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:58.849 18:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:58.849 18:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:58.849 18:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:58.849 18:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:58.849 18:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:58.849 18:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:58.849 18:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:58.849 18:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:58.849 18:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:58.849 18:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set 
nvmf_tgt_br2 master nvmf_br 00:17:58.849 18:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:58.849 18:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:58.849 18:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:58.849 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:58.849 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.105 ms 00:17:58.849 00:17:58.849 --- 10.0.0.2 ping statistics --- 00:17:58.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:58.849 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:17:58.849 18:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:58.849 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:58.849 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.093 ms 00:17:58.849 00:17:58.849 --- 10.0.0.3 ping statistics --- 00:17:58.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:58.849 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:17:58.849 18:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:58.849 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:58.849 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:17:58.849 00:17:58.849 --- 10.0.0.1 ping statistics --- 00:17:58.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:58.849 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:17:58.849 18:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:58.849 18:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@433 -- # return 0 00:17:58.849 18:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:58.849 18:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:58.849 18:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:58.849 18:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:58.849 18:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:58.849 18:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:58.849 18:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:58.849 18:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:17:58.849 18:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:58.849 18:37:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:58.849 18:37:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:58.849 18:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=91016 00:17:58.849 18:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:17:58.849 18:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 91016 00:17:58.849 18:37:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 91016 ']' 00:17:58.849 18:37:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:58.849 18:37:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:58.849 18:37:21 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:58.849 18:37:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:58.849 18:37:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:59.783 18:37:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:59.783 18:37:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:17:59.783 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:59.783 18:37:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:59.783 18:37:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:59.783 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:59.783 18:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:17:59.783 18:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:17:59.783 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:59.783 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:59.783 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:59.783 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:17:59.783 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:17:59.783 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:00.042 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=0ab78cebdb66b6f16e9f30ae6e0ca4f2 00:18:00.042 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:18:00.042 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.eYH 00:18:00.042 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 0ab78cebdb66b6f16e9f30ae6e0ca4f2 0 00:18:00.042 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 0ab78cebdb66b6f16e9f30ae6e0ca4f2 0 00:18:00.042 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:18:00.042 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:00.042 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=0ab78cebdb66b6f16e9f30ae6e0ca4f2 00:18:00.042 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:18:00.042 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:18:00.042 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.eYH 00:18:00.042 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.eYH 00:18:00.042 18:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.eYH 00:18:00.042 18:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:18:00.042 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:18:00.042 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:00.042 18:37:22 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@724 -- # local -A digests 00:18:00.042 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:18:00.042 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:18:00.042 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:00.042 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=986821869409c70fa51a2154816a98319e12f76f61b592b8ef14adc63e7ddbf1 00:18:00.042 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:18:00.042 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.3Mz 00:18:00.042 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 986821869409c70fa51a2154816a98319e12f76f61b592b8ef14adc63e7ddbf1 3 00:18:00.042 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 986821869409c70fa51a2154816a98319e12f76f61b592b8ef14adc63e7ddbf1 3 00:18:00.042 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:18:00.042 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:00.042 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=986821869409c70fa51a2154816a98319e12f76f61b592b8ef14adc63e7ddbf1 00:18:00.042 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:18:00.042 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:18:00.042 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.3Mz 00:18:00.042 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.3Mz 00:18:00.042 18:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.3Mz 00:18:00.042 18:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:18:00.042 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:18:00.042 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:00.042 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:18:00.042 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:18:00.042 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:18:00.042 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:00.042 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ca9e262eb51f67d6a44d9a770c56d464222958332603e94b 00:18:00.042 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:18:00.042 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.tHJ 00:18:00.042 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ca9e262eb51f67d6a44d9a770c56d464222958332603e94b 0 00:18:00.042 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ca9e262eb51f67d6a44d9a770c56d464222958332603e94b 0 00:18:00.042 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:18:00.042 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:00.042 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ca9e262eb51f67d6a44d9a770c56d464222958332603e94b 00:18:00.042 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:18:00.042 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # 
python - 00:18:00.042 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.tHJ 00:18:00.042 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.tHJ 00:18:00.042 18:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.tHJ 00:18:00.042 18:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:18:00.042 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:18:00.042 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:00.042 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:18:00.042 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:18:00.042 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:18:00.042 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:00.042 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=89848e1a258a49b96b280428d1723f797f027ee0e121e5fe 00:18:00.042 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:00.042 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.VQ7 00:18:00.042 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 89848e1a258a49b96b280428d1723f797f027ee0e121e5fe 2 00:18:00.042 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 89848e1a258a49b96b280428d1723f797f027ee0e121e5fe 2 00:18:00.042 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:18:00.042 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:00.042 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=89848e1a258a49b96b280428d1723f797f027ee0e121e5fe 00:18:00.042 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:18:00.042 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:18:00.300 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.VQ7 00:18:00.300 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.VQ7 00:18:00.300 18:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.VQ7 00:18:00.300 18:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:18:00.300 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:18:00.300 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:00.300 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:18:00.300 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:18:00.300 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:18:00.300 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:00.300 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=664caea7fc4f772cfb6e573564829a8e 00:18:00.300 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:00.300 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Z0P 00:18:00.300 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 664caea7fc4f772cfb6e573564829a8e 
1 00:18:00.300 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 664caea7fc4f772cfb6e573564829a8e 1 00:18:00.300 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:18:00.300 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:00.300 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=664caea7fc4f772cfb6e573564829a8e 00:18:00.300 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:18:00.300 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:18:00.300 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Z0P 00:18:00.300 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Z0P 00:18:00.300 18:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.Z0P 00:18:00.300 18:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:18:00.300 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:18:00.300 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:00.300 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:18:00.300 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:18:00.300 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:18:00.300 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:00.300 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ca73a4927e740c52ea3274f39effe324 00:18:00.300 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:00.300 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.iVD 00:18:00.300 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ca73a4927e740c52ea3274f39effe324 1 00:18:00.300 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ca73a4927e740c52ea3274f39effe324 1 00:18:00.300 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:18:00.300 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:00.300 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ca73a4927e740c52ea3274f39effe324 00:18:00.300 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:18:00.301 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:18:00.301 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.iVD 00:18:00.301 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.iVD 00:18:00.301 18:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.iVD 00:18:00.301 18:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:18:00.301 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:18:00.301 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:00.301 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:18:00.301 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:18:00.301 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:18:00.301 18:37:22 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:00.301 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=262a0db410ba3fdb7d38d6f8c164819d95d9ed6cddeb5649 00:18:00.301 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:00.301 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.7Ml 00:18:00.301 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 262a0db410ba3fdb7d38d6f8c164819d95d9ed6cddeb5649 2 00:18:00.301 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 262a0db410ba3fdb7d38d6f8c164819d95d9ed6cddeb5649 2 00:18:00.301 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:18:00.301 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:00.301 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=262a0db410ba3fdb7d38d6f8c164819d95d9ed6cddeb5649 00:18:00.301 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:18:00.301 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:18:00.301 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.7Ml 00:18:00.301 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.7Ml 00:18:00.301 18:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.7Ml 00:18:00.301 18:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:18:00.301 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:18:00.301 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:00.301 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:18:00.301 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:18:00.301 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:18:00.301 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:00.301 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8d5d646382bd04222d787a305c68349f 00:18:00.301 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:18:00.559 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.6op 00:18:00.559 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8d5d646382bd04222d787a305c68349f 0 00:18:00.559 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8d5d646382bd04222d787a305c68349f 0 00:18:00.559 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:18:00.559 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:00.559 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8d5d646382bd04222d787a305c68349f 00:18:00.559 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:18:00.559 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:18:00.559 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.6op 00:18:00.559 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.6op 00:18:00.559 18:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.6op 00:18:00.559 18:37:22 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:18:00.559 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:18:00.559 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:00.559 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:18:00.559 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:18:00.559 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:18:00.559 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:00.559 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ef1ee3df6f849ee3dd5aa4594f8ecd4edef5a52bda61aaec744e54602408bf02 00:18:00.559 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:18:00.559 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.kUt 00:18:00.559 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ef1ee3df6f849ee3dd5aa4594f8ecd4edef5a52bda61aaec744e54602408bf02 3 00:18:00.559 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ef1ee3df6f849ee3dd5aa4594f8ecd4edef5a52bda61aaec744e54602408bf02 3 00:18:00.559 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:18:00.559 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:00.559 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ef1ee3df6f849ee3dd5aa4594f8ecd4edef5a52bda61aaec744e54602408bf02 00:18:00.559 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:18:00.559 18:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:18:00.559 18:37:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.kUt 00:18:00.559 18:37:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.kUt 00:18:00.559 18:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.kUt 00:18:00.559 18:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:18:00.559 18:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 91016 00:18:00.559 18:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 91016 ']' 00:18:00.559 18:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:00.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:00.559 18:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:00.559 18:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
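The gen_dhchap_key calls traced above each draw a random hex secret of the requested length from /dev/urandom and wrap it into a DH-HMAC-CHAP secret string (the DHHC-1:NN:...: values that appear once the keys are used further down). A rough stand-alone approximation with a hypothetical helper name; the checksum step (CRC-32 of the secret appended least-significant byte first before base64 encoding) follows the usual DH-HMAC-CHAP secret representation and is an assumption here, the authoritative logic being the inline python in nvmf/common.sh:

# illustration only; usage: my_gen_dhchap_key <digest-id 0..3> <hex-length>
my_gen_dhchap_key() {
    local digest=$1 len=$2 key
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)    # e.g. len=48 -> 48 hex characters
    python3 -c '
import base64, sys, zlib
secret = sys.argv[1].encode()
crc = zlib.crc32(secret).to_bytes(4, "little")        # assumed checksum layout
print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(secret + crc).decode()))
' "$key" "$digest"
}

my_gen_dhchap_key 2 48    # roughly what 'gen_dhchap_key sha384 48' produces above
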
00:18:00.559 18:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:00.559 18:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.817 18:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:00.817 18:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:18:00.817 18:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:18:00.817 18:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.eYH 00:18:00.817 18:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.817 18:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.817 18:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.817 18:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.3Mz ]] 00:18:00.817 18:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.3Mz 00:18:00.817 18:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.817 18:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.817 18:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.817 18:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:18:00.817 18:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.tHJ 00:18:00.817 18:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.817 18:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.817 18:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.817 18:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.VQ7 ]] 00:18:00.817 18:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.VQ7 00:18:00.817 18:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.817 18:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.817 18:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.817 18:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:18:00.817 18:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.Z0P 00:18:00.817 18:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.817 18:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.817 18:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.817 18:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.iVD ]] 00:18:00.817 18:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.iVD 00:18:00.817 18:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.817 18:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.817 18:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.817 18:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:18:00.817 18:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.7Ml 00:18:00.817 18:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.817 18:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.817 18:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.817 18:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.6op ]] 00:18:00.817 18:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.6op 00:18:00.817 18:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.817 18:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.817 18:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.817 18:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:18:00.817 18:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.kUt 00:18:00.817 18:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.817 18:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.817 18:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.817 18:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:18:00.817 18:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:18:00.817 18:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:18:00.817 18:37:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:00.817 18:37:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:00.817 18:37:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:00.817 18:37:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:00.817 18:37:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:00.817 18:37:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:00.817 18:37:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:00.817 18:37:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:00.817 18:37:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:00.817 18:37:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:00.817 18:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:18:00.817 18:37:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:18:00.817 18:37:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:18:00.817 18:37:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:00.817 18:37:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:18:00.817 18:37:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:18:00.817 18:37:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
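The configure_kernel_target call that has just started above (and whose trace continues below) builds a kernel NVMe/TCP target for nqn.2024-02.io.spdk:cnode0 through the nvmet configfs tree, backed by a local NVMe namespace that is not in use (the "No valid GPT data, bailing" probes; /dev/nvme1n1 in this run). The bare echo lines in the trace do not show their redirection targets, so the attribute names in this condensed sketch are the standard nvmet configfs ones and are an assumption about where each value lands:

nqn=nqn.2024-02.io.spdk:cnode0
cfg=/sys/kernel/config/nvmet

modprobe nvmet
mkdir "$cfg/subsystems/$nqn"
mkdir "$cfg/subsystems/$nqn/namespaces/1"
mkdir "$cfg/ports/1"
echo "SPDK-$nqn"    > "$cfg/subsystems/$nqn/attr_model"            # assumed attribute
echo 1              > "$cfg/subsystems/$nqn/attr_allow_any_host"   # assumed attribute
echo /dev/nvme1n1   > "$cfg/subsystems/$nqn/namespaces/1/device_path"
echo 1              > "$cfg/subsystems/$nqn/namespaces/1/enable"
echo 10.0.0.1       > "$cfg/ports/1/addr_traddr"
echo tcp            > "$cfg/ports/1/addr_trtype"
echo 4420           > "$cfg/ports/1/addr_trsvcid"
echo ipv4           > "$cfg/ports/1/addr_adrfam"
ln -s "$cfg/subsystems/$nqn" "$cfg/ports/1/subsystems/"
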
00:18:00.817 18:37:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:18:00.817 18:37:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:18:00.817 18:37:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:18:00.817 18:37:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:18:01.389 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:01.389 Waiting for block devices as requested 00:18:01.389 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:18:01.647 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:18:02.581 18:37:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:18:02.581 18:37:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:18:02.581 18:37:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:18:02.581 18:37:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:18:02.581 18:37:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:18:02.581 18:37:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:18:02.581 18:37:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:18:02.581 18:37:24 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:18:02.581 18:37:24 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:18:02.581 No valid GPT data, bailing 00:18:02.581 18:37:24 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:18:02.581 18:37:24 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:18:02.581 18:37:24 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:18:02.581 18:37:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:18:02.581 18:37:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:18:02.581 18:37:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:18:02.581 18:37:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:18:02.581 18:37:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:18:02.581 18:37:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:18:02.581 18:37:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:18:02.581 18:37:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:18:02.581 18:37:24 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:18:02.581 18:37:24 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:18:02.581 No valid GPT data, bailing 00:18:02.581 18:37:25 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:18:02.581 18:37:25 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:18:02.581 18:37:25 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:18:02.581 18:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:18:02.581 18:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in 
/sys/block/nvme* 00:18:02.581 18:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:18:02.581 18:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:18:02.581 18:37:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:18:02.581 18:37:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:18:02.581 18:37:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:18:02.581 18:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:18:02.581 18:37:25 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:18:02.581 18:37:25 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:18:02.581 No valid GPT data, bailing 00:18:02.581 18:37:25 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:18:02.581 18:37:25 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:18:02.581 18:37:25 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:18:02.581 18:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:18:02.581 18:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:18:02.581 18:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:18:02.581 18:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:18:02.581 18:37:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:18:02.581 18:37:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:18:02.581 18:37:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:18:02.581 18:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:18:02.581 18:37:25 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:18:02.582 18:37:25 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:18:02.582 No valid GPT data, bailing 00:18:02.582 18:37:25 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:18:02.841 18:37:25 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:18:02.841 18:37:25 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:18:02.841 18:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:18:02.841 18:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:18:02.841 18:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:02.841 18:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:18:02.841 18:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:18:02.841 18:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:18:02.841 18:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:18:02.841 18:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:18:02.841 18:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:18:02.841 18:37:25 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:18:02.841 18:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:18:02.841 18:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:18:02.841 18:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:18:02.841 18:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:18:02.841 18:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid=ee8aff67-4252-4979-91cf-1a72f40d57b6 -a 10.0.0.1 -t tcp -s 4420 00:18:02.841 00:18:02.841 Discovery Log Number of Records 2, Generation counter 2 00:18:02.841 =====Discovery Log Entry 0====== 00:18:02.841 trtype: tcp 00:18:02.841 adrfam: ipv4 00:18:02.841 subtype: current discovery subsystem 00:18:02.841 treq: not specified, sq flow control disable supported 00:18:02.841 portid: 1 00:18:02.841 trsvcid: 4420 00:18:02.841 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:18:02.841 traddr: 10.0.0.1 00:18:02.841 eflags: none 00:18:02.841 sectype: none 00:18:02.841 =====Discovery Log Entry 1====== 00:18:02.841 trtype: tcp 00:18:02.841 adrfam: ipv4 00:18:02.841 subtype: nvme subsystem 00:18:02.841 treq: not specified, sq flow control disable supported 00:18:02.841 portid: 1 00:18:02.841 trsvcid: 4420 00:18:02.841 subnqn: nqn.2024-02.io.spdk:cnode0 00:18:02.841 traddr: 10.0.0.1 00:18:02.841 eflags: none 00:18:02.841 sectype: none 00:18:02.841 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:18:02.841 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:18:02.841 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:18:02.841 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:18:02.841 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:02.841 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:02.841 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:02.841 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:02.841 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2E5ZTI2MmViNTFmNjdkNmE0NGQ5YTc3MGM1NmQ0NjQyMjI5NTgzMzI2MDNlOTRiVuBtIg==: 00:18:02.841 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODk4NDhlMWEyNThhNDliOTZiMjgwNDI4ZDE3MjNmNzk3ZjAyN2VlMGUxMjFlNWZl693mWQ==: 00:18:02.841 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:02.841 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:02.841 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2E5ZTI2MmViNTFmNjdkNmE0NGQ5YTc3MGM1NmQ0NjQyMjI5NTgzMzI2MDNlOTRiVuBtIg==: 00:18:02.841 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODk4NDhlMWEyNThhNDliOTZiMjgwNDI4ZDE3MjNmNzk3ZjAyN2VlMGUxMjFlNWZl693mWQ==: ]] 00:18:02.841 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODk4NDhlMWEyNThhNDliOTZiMjgwNDI4ZDE3MjNmNzk3ZjAyN2VlMGUxMjFlNWZl693mWQ==: 00:18:02.841 18:37:25 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@93 -- # IFS=, 00:18:02.841 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:18:02.841 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:18:02.841 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:02.841 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:18:02.841 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:02.841 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:18:02.841 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:02.841 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:02.841 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:02.841 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:02.841 18:37:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.841 18:37:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.841 18:37:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.841 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:02.841 18:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:02.841 18:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:02.841 18:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:02.841 18:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:02.841 18:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:02.841 18:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:02.841 18:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:02.841 18:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:02.841 18:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:02.841 18:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:02.841 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:02.841 18:37:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.841 18:37:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.100 nvme0n1 00:18:03.100 18:37:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.100 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:03.100 18:37:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.100 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:03.100 18:37:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.100 18:37:25 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.100 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.100 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:03.100 18:37:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.100 18:37:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.100 18:37:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.100 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:18:03.100 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:03.100 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:03.100 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:18:03.100 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:03.100 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:03.100 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:03.100 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:03.100 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFiNzhjZWJkYjY2YjZmMTZlOWYzMGFlNmUwY2E0ZjL3gp6z: 00:18:03.100 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTg2ODIxODY5NDA5YzcwZmE1MWEyMTU0ODE2YTk4MzE5ZTEyZjc2ZjYxYjU5MmI4ZWYxNGFkYzYzZTdkZGJmMemFNSU=: 00:18:03.100 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:03.100 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:03.100 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFiNzhjZWJkYjY2YjZmMTZlOWYzMGFlNmUwY2E0ZjL3gp6z: 00:18:03.100 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTg2ODIxODY5NDA5YzcwZmE1MWEyMTU0ODE2YTk4MzE5ZTEyZjc2ZjYxYjU5MmI4ZWYxNGFkYzYzZTdkZGJmMemFNSU=: ]] 00:18:03.100 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTg2ODIxODY5NDA5YzcwZmE1MWEyMTU0ODE2YTk4MzE5ZTEyZjc2ZjYxYjU5MmI4ZWYxNGFkYzYzZTdkZGJmMemFNSU=: 00:18:03.100 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:18:03.100 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:03.100 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:03.100 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:03.100 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:03.100 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:03.100 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:03.100 18:37:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.100 18:37:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.100 18:37:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.100 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:03.100 18:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:03.100 18:37:25 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:18:03.100 18:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:03.100 18:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:03.100 18:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:03.100 18:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:03.100 18:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:03.100 18:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:03.100 18:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:03.100 18:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:03.100 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.100 18:37:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.100 18:37:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.100 nvme0n1 00:18:03.100 18:37:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.100 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:03.100 18:37:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.100 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:03.100 18:37:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.100 18:37:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.358 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.358 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:03.358 18:37:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.358 18:37:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.358 18:37:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.358 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:03.358 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:18:03.358 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:03.358 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:03.358 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:03.358 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:03.358 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2E5ZTI2MmViNTFmNjdkNmE0NGQ5YTc3MGM1NmQ0NjQyMjI5NTgzMzI2MDNlOTRiVuBtIg==: 00:18:03.358 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODk4NDhlMWEyNThhNDliOTZiMjgwNDI4ZDE3MjNmNzk3ZjAyN2VlMGUxMjFlNWZl693mWQ==: 00:18:03.358 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:03.358 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:03.358 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:Y2E5ZTI2MmViNTFmNjdkNmE0NGQ5YTc3MGM1NmQ0NjQyMjI5NTgzMzI2MDNlOTRiVuBtIg==: 00:18:03.358 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODk4NDhlMWEyNThhNDliOTZiMjgwNDI4ZDE3MjNmNzk3ZjAyN2VlMGUxMjFlNWZl693mWQ==: ]] 00:18:03.358 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODk4NDhlMWEyNThhNDliOTZiMjgwNDI4ZDE3MjNmNzk3ZjAyN2VlMGUxMjFlNWZl693mWQ==: 00:18:03.358 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:18:03.358 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:03.358 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:03.358 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:03.358 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:03.358 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:03.358 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:03.358 18:37:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.358 18:37:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.358 18:37:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.358 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:03.358 18:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:03.358 18:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:03.358 18:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:03.358 18:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:03.358 18:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:03.358 18:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:03.358 18:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:03.358 18:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:03.358 18:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:03.358 18:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:03.359 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.359 18:37:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.359 18:37:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.359 nvme0n1 00:18:03.359 18:37:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.359 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:03.359 18:37:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.359 18:37:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.359 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:03.359 18:37:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.359 18:37:25 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.359 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:03.359 18:37:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.359 18:37:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.359 18:37:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.359 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:03.359 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:18:03.359 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:03.359 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:03.359 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:03.359 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:03.359 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjY0Y2FlYTdmYzRmNzcyY2ZiNmU1NzM1NjQ4MjlhOGXh+K23: 00:18:03.359 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2E3M2E0OTI3ZTc0MGM1MmVhMzI3NGYzOWVmZmUzMjTABfjG: 00:18:03.359 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:03.359 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:03.359 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjY0Y2FlYTdmYzRmNzcyY2ZiNmU1NzM1NjQ4MjlhOGXh+K23: 00:18:03.359 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2E3M2E0OTI3ZTc0MGM1MmVhMzI3NGYzOWVmZmUzMjTABfjG: ]] 00:18:03.359 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2E3M2E0OTI3ZTc0MGM1MmVhMzI3NGYzOWVmZmUzMjTABfjG: 00:18:03.359 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:18:03.359 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:03.359 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:03.359 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:03.359 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:03.359 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:03.359 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:03.359 18:37:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.359 18:37:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.359 18:37:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.359 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:03.359 18:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:03.359 18:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:03.359 18:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:03.359 18:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:03.359 18:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:03.359 18:37:25 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:03.359 18:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:03.359 18:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:03.359 18:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:03.359 18:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:03.359 18:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:03.359 18:37:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.359 18:37:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.617 nvme0n1 00:18:03.617 18:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.617 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:03.617 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:03.617 18:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.617 18:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.617 18:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.617 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.617 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:03.617 18:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.617 18:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.617 18:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.617 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:03.617 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:18:03.617 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:03.617 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:03.617 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:03.617 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:03.617 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjYyYTBkYjQxMGJhM2ZkYjdkMzhkNmY4YzE2NDgxOWQ5NWQ5ZWQ2Y2RkZWI1NjQ5aUautg==: 00:18:03.617 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGQ1ZDY0NjM4MmJkMDQyMjJkNzg3YTMwNWM2ODM0OWZecr0a: 00:18:03.617 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:03.617 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:03.617 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjYyYTBkYjQxMGJhM2ZkYjdkMzhkNmY4YzE2NDgxOWQ5NWQ5ZWQ2Y2RkZWI1NjQ5aUautg==: 00:18:03.617 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGQ1ZDY0NjM4MmJkMDQyMjJkNzg3YTMwNWM2ODM0OWZecr0a: ]] 00:18:03.617 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGQ1ZDY0NjM4MmJkMDQyMjJkNzg3YTMwNWM2ODM0OWZecr0a: 00:18:03.617 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:18:03.617 18:37:26 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:03.617 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:03.617 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:03.617 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:03.617 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:03.617 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:03.617 18:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.617 18:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.617 18:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.617 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:03.617 18:37:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:03.617 18:37:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:03.617 18:37:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:03.617 18:37:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:03.617 18:37:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:03.617 18:37:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:03.617 18:37:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:03.617 18:37:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:03.617 18:37:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:03.617 18:37:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:03.617 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:03.617 18:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.617 18:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.617 nvme0n1 00:18:03.617 18:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.617 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:03.617 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:03.617 18:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.617 18:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.617 18:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.876 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.876 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:03.876 18:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.876 18:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.876 18:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.876 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:18:03.876 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:18:03.876 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:03.876 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:03.876 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:03.876 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:03.876 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWYxZWUzZGY2Zjg0OWVlM2RkNWFhNDU5NGY4ZWNkNGVkZWY1YTUyYmRhNjFhYWVjNzQ0ZTU0NjAyNDA4YmYwMi7Oo3U=: 00:18:03.876 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:03.876 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:03.876 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:03.876 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWYxZWUzZGY2Zjg0OWVlM2RkNWFhNDU5NGY4ZWNkNGVkZWY1YTUyYmRhNjFhYWVjNzQ0ZTU0NjAyNDA4YmYwMi7Oo3U=: 00:18:03.876 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:03.876 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:18:03.876 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:03.876 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:03.876 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:03.876 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:03.876 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:03.876 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:03.876 18:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.876 18:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.876 18:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.876 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:03.876 18:37:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:03.876 18:37:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:03.876 18:37:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:03.876 18:37:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:03.876 18:37:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:03.876 18:37:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:03.876 18:37:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:03.876 18:37:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:03.876 18:37:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:03.876 18:37:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:03.876 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:03.876 18:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:18:03.876 18:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.876 nvme0n1 00:18:03.876 18:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.876 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:03.876 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:03.876 18:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.876 18:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.876 18:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.876 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.876 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:03.876 18:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.876 18:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.876 18:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.876 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:03.876 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:03.876 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:18:03.876 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:03.876 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:03.876 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:03.876 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:03.876 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFiNzhjZWJkYjY2YjZmMTZlOWYzMGFlNmUwY2E0ZjL3gp6z: 00:18:03.876 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTg2ODIxODY5NDA5YzcwZmE1MWEyMTU0ODE2YTk4MzE5ZTEyZjc2ZjYxYjU5MmI4ZWYxNGFkYzYzZTdkZGJmMemFNSU=: 00:18:03.876 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:03.876 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:04.135 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFiNzhjZWJkYjY2YjZmMTZlOWYzMGFlNmUwY2E0ZjL3gp6z: 00:18:04.135 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTg2ODIxODY5NDA5YzcwZmE1MWEyMTU0ODE2YTk4MzE5ZTEyZjc2ZjYxYjU5MmI4ZWYxNGFkYzYzZTdkZGJmMemFNSU=: ]] 00:18:04.135 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTg2ODIxODY5NDA5YzcwZmE1MWEyMTU0ODE2YTk4MzE5ZTEyZjc2ZjYxYjU5MmI4ZWYxNGFkYzYzZTdkZGJmMemFNSU=: 00:18:04.135 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:18:04.135 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:04.135 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:04.135 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:04.135 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:04.135 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:04.135 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups ffdhe3072 00:18:04.135 18:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.135 18:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.135 18:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.135 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:04.135 18:37:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:04.135 18:37:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:04.135 18:37:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:04.135 18:37:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:04.135 18:37:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:04.135 18:37:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:04.135 18:37:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:04.135 18:37:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:04.135 18:37:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:04.135 18:37:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:04.135 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:04.135 18:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.135 18:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.394 nvme0n1 00:18:04.394 18:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.394 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:04.394 18:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.394 18:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.394 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:04.394 18:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.394 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.394 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:04.394 18:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.394 18:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.394 18:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.394 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:04.394 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:18:04.394 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:04.394 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:04.394 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:04.394 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:04.394 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:Y2E5ZTI2MmViNTFmNjdkNmE0NGQ5YTc3MGM1NmQ0NjQyMjI5NTgzMzI2MDNlOTRiVuBtIg==: 00:18:04.394 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODk4NDhlMWEyNThhNDliOTZiMjgwNDI4ZDE3MjNmNzk3ZjAyN2VlMGUxMjFlNWZl693mWQ==: 00:18:04.394 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:04.394 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:04.394 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2E5ZTI2MmViNTFmNjdkNmE0NGQ5YTc3MGM1NmQ0NjQyMjI5NTgzMzI2MDNlOTRiVuBtIg==: 00:18:04.394 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODk4NDhlMWEyNThhNDliOTZiMjgwNDI4ZDE3MjNmNzk3ZjAyN2VlMGUxMjFlNWZl693mWQ==: ]] 00:18:04.394 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODk4NDhlMWEyNThhNDliOTZiMjgwNDI4ZDE3MjNmNzk3ZjAyN2VlMGUxMjFlNWZl693mWQ==: 00:18:04.394 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:18:04.394 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:04.394 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:04.394 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:04.394 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:04.394 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:04.394 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:04.394 18:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.394 18:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.394 18:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.394 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:04.394 18:37:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:04.394 18:37:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:04.394 18:37:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:04.394 18:37:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:04.394 18:37:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:04.394 18:37:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:04.394 18:37:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:04.394 18:37:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:04.394 18:37:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:04.394 18:37:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:04.394 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:04.394 18:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.394 18:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.394 nvme0n1 00:18:04.394 18:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.395 18:37:26 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:04.395 18:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:04.395 18:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.395 18:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.395 18:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.653 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.653 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:04.653 18:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.653 18:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.653 18:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.653 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:04.653 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:18:04.653 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:04.653 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:04.653 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:04.653 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:04.653 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjY0Y2FlYTdmYzRmNzcyY2ZiNmU1NzM1NjQ4MjlhOGXh+K23: 00:18:04.653 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2E3M2E0OTI3ZTc0MGM1MmVhMzI3NGYzOWVmZmUzMjTABfjG: 00:18:04.653 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:04.653 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:04.653 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjY0Y2FlYTdmYzRmNzcyY2ZiNmU1NzM1NjQ4MjlhOGXh+K23: 00:18:04.653 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2E3M2E0OTI3ZTc0MGM1MmVhMzI3NGYzOWVmZmUzMjTABfjG: ]] 00:18:04.653 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2E3M2E0OTI3ZTc0MGM1MmVhMzI3NGYzOWVmZmUzMjTABfjG: 00:18:04.653 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:18:04.653 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:04.653 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:04.653 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:04.653 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:04.653 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:04.653 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:04.653 18:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.653 18:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.653 18:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.653 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:04.653 18:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:18:04.653 18:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:04.653 18:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:04.653 18:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:04.653 18:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:04.653 18:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:04.653 18:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:04.653 18:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:04.653 18:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:04.653 18:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:04.653 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.653 18:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.653 18:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.653 nvme0n1 00:18:04.653 18:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.653 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:04.653 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:04.653 18:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.653 18:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.653 18:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.653 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.653 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:04.653 18:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.653 18:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.653 18:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.653 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:04.653 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:18:04.653 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:04.653 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:04.653 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:04.653 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:04.653 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjYyYTBkYjQxMGJhM2ZkYjdkMzhkNmY4YzE2NDgxOWQ5NWQ5ZWQ2Y2RkZWI1NjQ5aUautg==: 00:18:04.653 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGQ1ZDY0NjM4MmJkMDQyMjJkNzg3YTMwNWM2ODM0OWZecr0a: 00:18:04.653 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:04.653 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:04.653 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:MjYyYTBkYjQxMGJhM2ZkYjdkMzhkNmY4YzE2NDgxOWQ5NWQ5ZWQ2Y2RkZWI1NjQ5aUautg==: 00:18:04.653 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGQ1ZDY0NjM4MmJkMDQyMjJkNzg3YTMwNWM2ODM0OWZecr0a: ]] 00:18:04.653 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGQ1ZDY0NjM4MmJkMDQyMjJkNzg3YTMwNWM2ODM0OWZecr0a: 00:18:04.653 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:18:04.653 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:04.653 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:04.653 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:04.653 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:04.653 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:04.653 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:04.653 18:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.653 18:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.653 18:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.653 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:04.653 18:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:04.653 18:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:04.653 18:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:04.653 18:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:04.653 18:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:04.654 18:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:04.654 18:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:04.654 18:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:04.654 18:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:04.654 18:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:04.654 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:04.654 18:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.654 18:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.913 nvme0n1 00:18:04.913 18:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.913 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:04.913 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:04.913 18:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.913 18:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.913 18:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.913 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:18:04.913 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:04.913 18:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.913 18:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.913 18:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.913 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:04.913 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:18:04.913 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:04.913 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:04.913 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:04.913 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:04.913 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWYxZWUzZGY2Zjg0OWVlM2RkNWFhNDU5NGY4ZWNkNGVkZWY1YTUyYmRhNjFhYWVjNzQ0ZTU0NjAyNDA4YmYwMi7Oo3U=: 00:18:04.913 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:04.913 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:04.913 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:04.913 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWYxZWUzZGY2Zjg0OWVlM2RkNWFhNDU5NGY4ZWNkNGVkZWY1YTUyYmRhNjFhYWVjNzQ0ZTU0NjAyNDA4YmYwMi7Oo3U=: 00:18:04.913 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:04.913 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:18:04.913 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:04.913 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:04.913 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:04.913 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:04.913 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:04.913 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:04.913 18:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.913 18:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.913 18:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.913 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:04.913 18:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:04.913 18:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:04.913 18:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:04.913 18:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:04.914 18:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:04.914 18:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:04.914 18:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:04.914 18:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
00:18:04.914 18:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:04.914 18:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:04.914 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:04.914 18:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.914 18:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:05.172 nvme0n1 00:18:05.172 18:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.172 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:05.172 18:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.172 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:05.172 18:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:05.172 18:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.172 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.172 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:05.172 18:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.172 18:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:05.172 18:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.172 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:05.172 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:05.172 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:18:05.172 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:05.172 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:05.172 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:05.172 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:05.172 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFiNzhjZWJkYjY2YjZmMTZlOWYzMGFlNmUwY2E0ZjL3gp6z: 00:18:05.172 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTg2ODIxODY5NDA5YzcwZmE1MWEyMTU0ODE2YTk4MzE5ZTEyZjc2ZjYxYjU5MmI4ZWYxNGFkYzYzZTdkZGJmMemFNSU=: 00:18:05.172 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:05.172 18:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:05.741 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFiNzhjZWJkYjY2YjZmMTZlOWYzMGFlNmUwY2E0ZjL3gp6z: 00:18:05.741 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTg2ODIxODY5NDA5YzcwZmE1MWEyMTU0ODE2YTk4MzE5ZTEyZjc2ZjYxYjU5MmI4ZWYxNGFkYzYzZTdkZGJmMemFNSU=: ]] 00:18:05.741 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTg2ODIxODY5NDA5YzcwZmE1MWEyMTU0ODE2YTk4MzE5ZTEyZjc2ZjYxYjU5MmI4ZWYxNGFkYzYzZTdkZGJmMemFNSU=: 00:18:05.741 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:18:05.741 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
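The trace above repeats one authentication round per digest/dhgroup/keyid combination: connect_authenticate first narrows the host to the digest and DH group under test with bdev_nvme_set_options, then attaches with the matching per-keyid secret, checks the controller name, and detaches before the next round. A minimal sketch of a single round, using only the commands and flags visible in this trace and assuming the DH-HMAC-CHAP keys (key0..key4 and their ckey counterparts) are already registered on the host and the target listens on 10.0.0.1:4420 as in this run:

    # Restrict the host to the digest/dhgroup pair under test (values taken from the trace).
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
    # Attach with the per-keyid secret; the controller key is passed only when a ckey exists
    # (keyid 4 in this trace attaches without --dhchap-ctrlr-key).
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key3 --dhchap-ctrlr-key ckey3
    # Verify the controller came up authenticated, then tear it down for the next round.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
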
00:18:05.741 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:05.741 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:05.741 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:05.741 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:05.741 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:05.741 18:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.741 18:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:05.741 18:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.741 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:05.741 18:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:05.741 18:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:05.741 18:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:05.741 18:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:05.741 18:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:05.741 18:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:05.741 18:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:05.741 18:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:05.741 18:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:05.741 18:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:05.741 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.741 18:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.741 18:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:05.741 nvme0n1 00:18:05.741 18:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.741 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:05.741 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:05.741 18:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.741 18:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:05.741 18:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.741 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.741 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:05.741 18:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.741 18:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:05.741 18:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.741 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:05.741 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 1 00:18:05.741 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:05.741 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:05.741 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:05.741 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:05.741 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2E5ZTI2MmViNTFmNjdkNmE0NGQ5YTc3MGM1NmQ0NjQyMjI5NTgzMzI2MDNlOTRiVuBtIg==: 00:18:05.741 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODk4NDhlMWEyNThhNDliOTZiMjgwNDI4ZDE3MjNmNzk3ZjAyN2VlMGUxMjFlNWZl693mWQ==: 00:18:05.741 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:05.741 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:05.741 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2E5ZTI2MmViNTFmNjdkNmE0NGQ5YTc3MGM1NmQ0NjQyMjI5NTgzMzI2MDNlOTRiVuBtIg==: 00:18:05.741 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODk4NDhlMWEyNThhNDliOTZiMjgwNDI4ZDE3MjNmNzk3ZjAyN2VlMGUxMjFlNWZl693mWQ==: ]] 00:18:05.741 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODk4NDhlMWEyNThhNDliOTZiMjgwNDI4ZDE3MjNmNzk3ZjAyN2VlMGUxMjFlNWZl693mWQ==: 00:18:05.741 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:18:05.741 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:05.741 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:05.741 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:05.741 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:05.741 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:05.741 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:05.741 18:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.741 18:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:05.741 18:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.741 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:05.741 18:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:05.741 18:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:05.741 18:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:05.741 18:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:05.741 18:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:05.741 18:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:05.741 18:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:05.741 18:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:05.741 18:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:05.741 18:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:05.741 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.741 18:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.741 18:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.000 nvme0n1 00:18:06.000 18:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.000 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:06.000 18:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.000 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:06.000 18:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.000 18:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.000 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.000 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:06.000 18:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.001 18:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.001 18:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.001 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:06.001 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:18:06.001 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:06.001 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:06.001 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:06.001 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:06.001 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjY0Y2FlYTdmYzRmNzcyY2ZiNmU1NzM1NjQ4MjlhOGXh+K23: 00:18:06.001 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2E3M2E0OTI3ZTc0MGM1MmVhMzI3NGYzOWVmZmUzMjTABfjG: 00:18:06.001 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:06.001 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:06.001 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjY0Y2FlYTdmYzRmNzcyY2ZiNmU1NzM1NjQ4MjlhOGXh+K23: 00:18:06.001 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2E3M2E0OTI3ZTc0MGM1MmVhMzI3NGYzOWVmZmUzMjTABfjG: ]] 00:18:06.001 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2E3M2E0OTI3ZTc0MGM1MmVhMzI3NGYzOWVmZmUzMjTABfjG: 00:18:06.001 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:18:06.001 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:06.001 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:06.001 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:06.001 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:06.001 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:06.001 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:18:06.001 18:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.001 18:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.001 18:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.001 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:06.001 18:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:06.001 18:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:06.001 18:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:06.001 18:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:06.001 18:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:06.001 18:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:06.001 18:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:06.001 18:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:06.001 18:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:06.001 18:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:06.001 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.001 18:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.001 18:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.260 nvme0n1 00:18:06.260 18:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.260 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:06.260 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:06.260 18:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.260 18:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.260 18:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.260 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.260 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:06.260 18:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.260 18:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.260 18:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.260 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:06.260 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:18:06.260 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:06.260 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:06.260 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:06.260 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:06.260 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MjYyYTBkYjQxMGJhM2ZkYjdkMzhkNmY4YzE2NDgxOWQ5NWQ5ZWQ2Y2RkZWI1NjQ5aUautg==: 00:18:06.260 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGQ1ZDY0NjM4MmJkMDQyMjJkNzg3YTMwNWM2ODM0OWZecr0a: 00:18:06.260 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:06.260 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:06.260 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjYyYTBkYjQxMGJhM2ZkYjdkMzhkNmY4YzE2NDgxOWQ5NWQ5ZWQ2Y2RkZWI1NjQ5aUautg==: 00:18:06.260 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGQ1ZDY0NjM4MmJkMDQyMjJkNzg3YTMwNWM2ODM0OWZecr0a: ]] 00:18:06.260 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGQ1ZDY0NjM4MmJkMDQyMjJkNzg3YTMwNWM2ODM0OWZecr0a: 00:18:06.260 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:18:06.260 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:06.260 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:06.260 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:06.260 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:06.260 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:06.260 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:06.260 18:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.260 18:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.260 18:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.260 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:06.260 18:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:06.260 18:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:06.260 18:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:06.260 18:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:06.260 18:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:06.260 18:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:06.260 18:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:06.260 18:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:06.260 18:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:06.260 18:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:06.260 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:06.260 18:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.260 18:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.520 nvme0n1 00:18:06.520 18:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.520 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:18:06.520 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:06.520 18:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.520 18:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.520 18:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.520 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.520 18:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:06.520 18:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.520 18:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.520 18:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.520 18:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:06.520 18:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:18:06.520 18:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:06.520 18:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:06.520 18:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:06.520 18:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:06.520 18:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWYxZWUzZGY2Zjg0OWVlM2RkNWFhNDU5NGY4ZWNkNGVkZWY1YTUyYmRhNjFhYWVjNzQ0ZTU0NjAyNDA4YmYwMi7Oo3U=: 00:18:06.520 18:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:06.520 18:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:06.520 18:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:06.520 18:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWYxZWUzZGY2Zjg0OWVlM2RkNWFhNDU5NGY4ZWNkNGVkZWY1YTUyYmRhNjFhYWVjNzQ0ZTU0NjAyNDA4YmYwMi7Oo3U=: 00:18:06.520 18:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:06.520 18:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:18:06.520 18:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:06.520 18:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:06.520 18:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:06.520 18:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:06.520 18:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:06.520 18:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:06.520 18:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.520 18:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.520 18:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.520 18:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:06.520 18:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:06.520 18:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:06.520 18:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:06.520 18:37:29 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:06.520 18:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:06.520 18:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:06.520 18:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:06.520 18:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:06.520 18:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:06.520 18:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:06.520 18:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:06.520 18:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.520 18:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.779 nvme0n1 00:18:06.779 18:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.779 18:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:06.779 18:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.779 18:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:06.779 18:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.779 18:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.779 18:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.779 18:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:06.779 18:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.779 18:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.779 18:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.779 18:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:06.779 18:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:06.779 18:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:18:06.779 18:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:06.779 18:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:06.779 18:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:06.779 18:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:06.779 18:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFiNzhjZWJkYjY2YjZmMTZlOWYzMGFlNmUwY2E0ZjL3gp6z: 00:18:06.779 18:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTg2ODIxODY5NDA5YzcwZmE1MWEyMTU0ODE2YTk4MzE5ZTEyZjc2ZjYxYjU5MmI4ZWYxNGFkYzYzZTdkZGJmMemFNSU=: 00:18:06.779 18:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:06.779 18:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:08.157 18:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFiNzhjZWJkYjY2YjZmMTZlOWYzMGFlNmUwY2E0ZjL3gp6z: 00:18:08.157 18:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:OTg2ODIxODY5NDA5YzcwZmE1MWEyMTU0ODE2YTk4MzE5ZTEyZjc2ZjYxYjU5MmI4ZWYxNGFkYzYzZTdkZGJmMemFNSU=: ]] 00:18:08.157 18:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTg2ODIxODY5NDA5YzcwZmE1MWEyMTU0ODE2YTk4MzE5ZTEyZjc2ZjYxYjU5MmI4ZWYxNGFkYzYzZTdkZGJmMemFNSU=: 00:18:08.157 18:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:18:08.157 18:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:08.157 18:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:08.157 18:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:08.157 18:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:08.157 18:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:08.157 18:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:08.157 18:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.157 18:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:08.157 18:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.157 18:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:08.157 18:37:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:08.157 18:37:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:08.157 18:37:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:08.157 18:37:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:08.157 18:37:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:08.157 18:37:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:08.157 18:37:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:08.157 18:37:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:08.157 18:37:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:08.157 18:37:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:08.157 18:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:08.157 18:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.157 18:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:08.415 nvme0n1 00:18:08.415 18:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.415 18:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:08.415 18:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:08.416 18:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.416 18:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:08.416 18:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.416 18:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.416 18:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- 
# rpc_cmd bdev_nvme_detach_controller nvme0 00:18:08.416 18:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.416 18:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:08.416 18:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.416 18:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:08.416 18:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:18:08.416 18:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:08.416 18:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:08.416 18:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:08.416 18:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:08.416 18:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2E5ZTI2MmViNTFmNjdkNmE0NGQ5YTc3MGM1NmQ0NjQyMjI5NTgzMzI2MDNlOTRiVuBtIg==: 00:18:08.416 18:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODk4NDhlMWEyNThhNDliOTZiMjgwNDI4ZDE3MjNmNzk3ZjAyN2VlMGUxMjFlNWZl693mWQ==: 00:18:08.416 18:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:08.416 18:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:08.416 18:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2E5ZTI2MmViNTFmNjdkNmE0NGQ5YTc3MGM1NmQ0NjQyMjI5NTgzMzI2MDNlOTRiVuBtIg==: 00:18:08.416 18:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODk4NDhlMWEyNThhNDliOTZiMjgwNDI4ZDE3MjNmNzk3ZjAyN2VlMGUxMjFlNWZl693mWQ==: ]] 00:18:08.416 18:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODk4NDhlMWEyNThhNDliOTZiMjgwNDI4ZDE3MjNmNzk3ZjAyN2VlMGUxMjFlNWZl693mWQ==: 00:18:08.416 18:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:18:08.416 18:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:08.416 18:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:08.416 18:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:08.416 18:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:08.416 18:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:08.416 18:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:08.416 18:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.416 18:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:08.416 18:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.416 18:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:08.416 18:37:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:08.416 18:37:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:08.416 18:37:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:08.416 18:37:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:08.416 18:37:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:08.416 18:37:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z 
tcp ]] 00:18:08.416 18:37:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:08.416 18:37:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:08.416 18:37:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:08.416 18:37:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:08.416 18:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:08.416 18:37:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.416 18:37:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:08.983 nvme0n1 00:18:08.984 18:37:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.984 18:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:08.984 18:37:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.984 18:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:08.984 18:37:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:08.984 18:37:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.984 18:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.984 18:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:08.984 18:37:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.984 18:37:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:08.984 18:37:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.984 18:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:08.984 18:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:18:08.984 18:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:08.984 18:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:08.984 18:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:08.984 18:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:08.984 18:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjY0Y2FlYTdmYzRmNzcyY2ZiNmU1NzM1NjQ4MjlhOGXh+K23: 00:18:08.984 18:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2E3M2E0OTI3ZTc0MGM1MmVhMzI3NGYzOWVmZmUzMjTABfjG: 00:18:08.984 18:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:08.984 18:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:08.984 18:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjY0Y2FlYTdmYzRmNzcyY2ZiNmU1NzM1NjQ4MjlhOGXh+K23: 00:18:08.984 18:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2E3M2E0OTI3ZTc0MGM1MmVhMzI3NGYzOWVmZmUzMjTABfjG: ]] 00:18:08.984 18:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2E3M2E0OTI3ZTc0MGM1MmVhMzI3NGYzOWVmZmUzMjTABfjG: 00:18:08.984 18:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:18:08.984 18:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:08.984 
18:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:08.984 18:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:08.984 18:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:08.984 18:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:08.984 18:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:08.984 18:37:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.984 18:37:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:08.984 18:37:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.984 18:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:08.984 18:37:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:08.984 18:37:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:08.984 18:37:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:08.984 18:37:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:08.984 18:37:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:08.984 18:37:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:08.984 18:37:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:08.984 18:37:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:08.984 18:37:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:08.984 18:37:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:08.984 18:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:08.984 18:37:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.984 18:37:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:09.243 nvme0n1 00:18:09.243 18:37:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.243 18:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:09.243 18:37:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.243 18:37:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:09.243 18:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:09.243 18:37:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.243 18:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.243 18:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:09.243 18:37:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.243 18:37:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:09.243 18:37:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.243 18:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:09.243 18:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 3 00:18:09.243 18:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:09.243 18:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:09.243 18:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:09.243 18:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:09.243 18:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjYyYTBkYjQxMGJhM2ZkYjdkMzhkNmY4YzE2NDgxOWQ5NWQ5ZWQ2Y2RkZWI1NjQ5aUautg==: 00:18:09.243 18:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGQ1ZDY0NjM4MmJkMDQyMjJkNzg3YTMwNWM2ODM0OWZecr0a: 00:18:09.243 18:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:09.243 18:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:09.243 18:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjYyYTBkYjQxMGJhM2ZkYjdkMzhkNmY4YzE2NDgxOWQ5NWQ5ZWQ2Y2RkZWI1NjQ5aUautg==: 00:18:09.243 18:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGQ1ZDY0NjM4MmJkMDQyMjJkNzg3YTMwNWM2ODM0OWZecr0a: ]] 00:18:09.243 18:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGQ1ZDY0NjM4MmJkMDQyMjJkNzg3YTMwNWM2ODM0OWZecr0a: 00:18:09.243 18:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:18:09.243 18:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:09.243 18:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:09.243 18:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:09.243 18:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:09.243 18:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:09.243 18:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:09.243 18:37:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.243 18:37:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:09.243 18:37:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.243 18:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:09.243 18:37:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:09.243 18:37:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:09.243 18:37:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:09.243 18:37:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:09.243 18:37:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:09.243 18:37:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:09.243 18:37:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:09.243 18:37:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:09.243 18:37:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:09.243 18:37:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:09.243 18:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:09.243 18:37:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.243 18:37:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:09.502 nvme0n1 00:18:09.502 18:37:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.502 18:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:09.502 18:37:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.502 18:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:09.502 18:37:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:09.502 18:37:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.502 18:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.502 18:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:09.502 18:37:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.502 18:37:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:09.760 18:37:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.760 18:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:09.760 18:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:18:09.760 18:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:09.760 18:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:09.760 18:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:09.760 18:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:09.760 18:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWYxZWUzZGY2Zjg0OWVlM2RkNWFhNDU5NGY4ZWNkNGVkZWY1YTUyYmRhNjFhYWVjNzQ0ZTU0NjAyNDA4YmYwMi7Oo3U=: 00:18:09.760 18:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:09.760 18:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:09.760 18:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:09.760 18:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWYxZWUzZGY2Zjg0OWVlM2RkNWFhNDU5NGY4ZWNkNGVkZWY1YTUyYmRhNjFhYWVjNzQ0ZTU0NjAyNDA4YmYwMi7Oo3U=: 00:18:09.760 18:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:09.760 18:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:18:09.760 18:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:09.760 18:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:09.760 18:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:09.761 18:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:09.761 18:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:09.761 18:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:09.761 18:37:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.761 18:37:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:09.761 18:37:32 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.761 18:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:09.761 18:37:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:09.761 18:37:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:09.761 18:37:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:09.761 18:37:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:09.761 18:37:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:09.761 18:37:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:09.761 18:37:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:09.761 18:37:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:09.761 18:37:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:09.761 18:37:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:09.761 18:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:09.761 18:37:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.761 18:37:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:10.020 nvme0n1 00:18:10.020 18:37:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.020 18:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:10.020 18:37:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.020 18:37:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:10.020 18:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:10.020 18:37:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.020 18:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.020 18:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:10.020 18:37:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.020 18:37:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:10.020 18:37:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.020 18:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:10.020 18:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:10.020 18:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:18:10.020 18:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:10.020 18:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:10.020 18:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:10.020 18:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:10.020 18:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFiNzhjZWJkYjY2YjZmMTZlOWYzMGFlNmUwY2E0ZjL3gp6z: 00:18:10.020 18:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:OTg2ODIxODY5NDA5YzcwZmE1MWEyMTU0ODE2YTk4MzE5ZTEyZjc2ZjYxYjU5MmI4ZWYxNGFkYzYzZTdkZGJmMemFNSU=: 00:18:10.020 18:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:10.020 18:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:10.020 18:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFiNzhjZWJkYjY2YjZmMTZlOWYzMGFlNmUwY2E0ZjL3gp6z: 00:18:10.020 18:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTg2ODIxODY5NDA5YzcwZmE1MWEyMTU0ODE2YTk4MzE5ZTEyZjc2ZjYxYjU5MmI4ZWYxNGFkYzYzZTdkZGJmMemFNSU=: ]] 00:18:10.020 18:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTg2ODIxODY5NDA5YzcwZmE1MWEyMTU0ODE2YTk4MzE5ZTEyZjc2ZjYxYjU5MmI4ZWYxNGFkYzYzZTdkZGJmMemFNSU=: 00:18:10.020 18:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:18:10.020 18:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:10.020 18:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:10.020 18:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:10.020 18:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:10.020 18:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:10.020 18:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:10.020 18:37:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.020 18:37:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:10.020 18:37:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.020 18:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:10.020 18:37:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:10.020 18:37:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:10.020 18:37:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:10.020 18:37:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:10.020 18:37:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:10.020 18:37:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:10.020 18:37:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:10.020 18:37:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:10.020 18:37:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:10.020 18:37:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:10.020 18:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.020 18:37:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.020 18:37:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:10.588 nvme0n1 00:18:10.588 18:37:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.588 18:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:10.588 18:37:33 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.588 18:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:10.588 18:37:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:10.588 18:37:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.588 18:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.588 18:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:10.588 18:37:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.588 18:37:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:10.588 18:37:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.588 18:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:10.588 18:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:18:10.588 18:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:10.588 18:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:10.588 18:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:10.588 18:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:10.588 18:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2E5ZTI2MmViNTFmNjdkNmE0NGQ5YTc3MGM1NmQ0NjQyMjI5NTgzMzI2MDNlOTRiVuBtIg==: 00:18:10.588 18:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODk4NDhlMWEyNThhNDliOTZiMjgwNDI4ZDE3MjNmNzk3ZjAyN2VlMGUxMjFlNWZl693mWQ==: 00:18:10.588 18:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:10.588 18:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:10.588 18:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2E5ZTI2MmViNTFmNjdkNmE0NGQ5YTc3MGM1NmQ0NjQyMjI5NTgzMzI2MDNlOTRiVuBtIg==: 00:18:10.588 18:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODk4NDhlMWEyNThhNDliOTZiMjgwNDI4ZDE3MjNmNzk3ZjAyN2VlMGUxMjFlNWZl693mWQ==: ]] 00:18:10.588 18:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODk4NDhlMWEyNThhNDliOTZiMjgwNDI4ZDE3MjNmNzk3ZjAyN2VlMGUxMjFlNWZl693mWQ==: 00:18:10.588 18:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:18:10.588 18:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:10.588 18:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:10.588 18:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:10.588 18:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:10.588 18:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:10.588 18:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:10.588 18:37:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.588 18:37:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:10.588 18:37:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.588 18:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:10.588 18:37:33 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:18:10.588 18:37:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:10.588 18:37:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:10.588 18:37:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:10.588 18:37:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:10.588 18:37:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:10.588 18:37:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:10.588 18:37:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:10.588 18:37:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:10.588 18:37:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:10.588 18:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:10.588 18:37:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.588 18:37:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:11.172 nvme0n1 00:18:11.172 18:37:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.172 18:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:11.172 18:37:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.172 18:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:11.172 18:37:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:11.172 18:37:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.172 18:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.172 18:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:11.172 18:37:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.172 18:37:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:11.172 18:37:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.172 18:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:11.172 18:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:18:11.172 18:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:11.172 18:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:11.172 18:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:11.172 18:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:11.172 18:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjY0Y2FlYTdmYzRmNzcyY2ZiNmU1NzM1NjQ4MjlhOGXh+K23: 00:18:11.172 18:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2E3M2E0OTI3ZTc0MGM1MmVhMzI3NGYzOWVmZmUzMjTABfjG: 00:18:11.172 18:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:11.172 18:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:11.172 18:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:NjY0Y2FlYTdmYzRmNzcyY2ZiNmU1NzM1NjQ4MjlhOGXh+K23: 00:18:11.172 18:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2E3M2E0OTI3ZTc0MGM1MmVhMzI3NGYzOWVmZmUzMjTABfjG: ]] 00:18:11.172 18:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2E3M2E0OTI3ZTc0MGM1MmVhMzI3NGYzOWVmZmUzMjTABfjG: 00:18:11.172 18:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:18:11.172 18:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:11.172 18:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:11.172 18:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:11.172 18:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:11.172 18:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:11.172 18:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:11.172 18:37:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.172 18:37:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:11.172 18:37:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.172 18:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:11.172 18:37:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:11.172 18:37:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:11.172 18:37:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:11.172 18:37:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:11.172 18:37:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:11.172 18:37:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:11.172 18:37:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:11.172 18:37:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:11.172 18:37:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:11.172 18:37:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:11.172 18:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.172 18:37:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.172 18:37:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:11.737 nvme0n1 00:18:11.737 18:37:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.737 18:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:11.737 18:37:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.737 18:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:11.737 18:37:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:11.737 18:37:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.737 18:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.737 
18:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:11.737 18:37:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.737 18:37:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:11.737 18:37:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.738 18:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:11.738 18:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:18:11.738 18:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:11.738 18:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:11.738 18:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:11.738 18:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:11.738 18:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjYyYTBkYjQxMGJhM2ZkYjdkMzhkNmY4YzE2NDgxOWQ5NWQ5ZWQ2Y2RkZWI1NjQ5aUautg==: 00:18:11.738 18:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGQ1ZDY0NjM4MmJkMDQyMjJkNzg3YTMwNWM2ODM0OWZecr0a: 00:18:11.738 18:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:11.738 18:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:11.738 18:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjYyYTBkYjQxMGJhM2ZkYjdkMzhkNmY4YzE2NDgxOWQ5NWQ5ZWQ2Y2RkZWI1NjQ5aUautg==: 00:18:11.738 18:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGQ1ZDY0NjM4MmJkMDQyMjJkNzg3YTMwNWM2ODM0OWZecr0a: ]] 00:18:11.738 18:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGQ1ZDY0NjM4MmJkMDQyMjJkNzg3YTMwNWM2ODM0OWZecr0a: 00:18:11.738 18:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:18:11.738 18:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:11.738 18:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:11.738 18:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:11.738 18:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:11.738 18:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:11.738 18:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:11.738 18:37:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.738 18:37:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:11.738 18:37:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.738 18:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:11.738 18:37:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:11.738 18:37:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:11.738 18:37:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:11.738 18:37:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:11.738 18:37:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:11.738 18:37:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
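The get_main_ns_ip fragments traced above pick the address the initiator dials from the test transport: NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp, which resolves to 10.0.0.1 in this run. Below is a minimal sketch of that selection logic, reconstructed only from the commands visible in the trace; TEST_TRANSPORT and the exported NVMF_* values are assumptions about the surrounding test environment, not something shown in this log.

  # Sketch of the IP-selection helper traced as nvmf/common.sh get_main_ns_ip.
  # TEST_TRANSPORT and the NVMF_* variables are assumed to be exported by the
  # test environment; only the lookup pattern itself is taken from the trace.
  get_main_ns_ip() {
      local ip
      local -A ip_candidates
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP

      [[ -z $TEST_TRANSPORT ]] && return 1                     # trace: [[ -z tcp ]]
      [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1   # trace: [[ -z NVMF_INITIATOR_IP ]]

      ip=${ip_candidates[$TEST_TRANSPORT]}   # e.g. NVMF_INITIATOR_IP
      ip=${!ip}                              # indirect expansion -> 10.0.0.1 in this run
      [[ -z $ip ]] && return 1
      echo "$ip"
  }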
00:18:11.738 18:37:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:11.738 18:37:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:11.738 18:37:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:11.738 18:37:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:11.738 18:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:11.738 18:37:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.738 18:37:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:12.305 nvme0n1 00:18:12.305 18:37:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.305 18:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:12.305 18:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:12.305 18:37:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.305 18:37:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:12.305 18:37:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.305 18:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.305 18:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:12.305 18:37:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.305 18:37:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:12.305 18:37:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.305 18:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:12.305 18:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:18:12.305 18:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:12.305 18:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:12.305 18:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:12.305 18:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:12.305 18:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWYxZWUzZGY2Zjg0OWVlM2RkNWFhNDU5NGY4ZWNkNGVkZWY1YTUyYmRhNjFhYWVjNzQ0ZTU0NjAyNDA4YmYwMi7Oo3U=: 00:18:12.305 18:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:12.305 18:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:12.305 18:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:12.305 18:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWYxZWUzZGY2Zjg0OWVlM2RkNWFhNDU5NGY4ZWNkNGVkZWY1YTUyYmRhNjFhYWVjNzQ0ZTU0NjAyNDA4YmYwMi7Oo3U=: 00:18:12.305 18:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:12.305 18:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:18:12.305 18:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:12.305 18:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:12.305 18:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:12.305 
18:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:12.305 18:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:12.305 18:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:12.305 18:37:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.305 18:37:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:12.305 18:37:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.305 18:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:12.305 18:37:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:12.305 18:37:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:12.305 18:37:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:12.305 18:37:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:12.305 18:37:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:12.305 18:37:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:12.305 18:37:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:12.305 18:37:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:12.305 18:37:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:12.305 18:37:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:12.564 18:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:12.564 18:37:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.564 18:37:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:12.822 nvme0n1 00:18:12.822 18:37:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.822 18:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:12.822 18:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:12.822 18:37:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.822 18:37:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:12.822 18:37:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFiNzhjZWJkYjY2YjZmMTZlOWYzMGFlNmUwY2E0ZjL3gp6z: 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTg2ODIxODY5NDA5YzcwZmE1MWEyMTU0ODE2YTk4MzE5ZTEyZjc2ZjYxYjU5MmI4ZWYxNGFkYzYzZTdkZGJmMemFNSU=: 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFiNzhjZWJkYjY2YjZmMTZlOWYzMGFlNmUwY2E0ZjL3gp6z: 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTg2ODIxODY5NDA5YzcwZmE1MWEyMTU0ODE2YTk4MzE5ZTEyZjc2ZjYxYjU5MmI4ZWYxNGFkYzYzZTdkZGJmMemFNSU=: ]] 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTg2ODIxODY5NDA5YzcwZmE1MWEyMTU0ODE2YTk4MzE5ZTEyZjc2ZjYxYjU5MmI4ZWYxNGFkYzYzZTdkZGJmMemFNSU=: 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.081 nvme0n1 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2E5ZTI2MmViNTFmNjdkNmE0NGQ5YTc3MGM1NmQ0NjQyMjI5NTgzMzI2MDNlOTRiVuBtIg==: 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODk4NDhlMWEyNThhNDliOTZiMjgwNDI4ZDE3MjNmNzk3ZjAyN2VlMGUxMjFlNWZl693mWQ==: 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2E5ZTI2MmViNTFmNjdkNmE0NGQ5YTc3MGM1NmQ0NjQyMjI5NTgzMzI2MDNlOTRiVuBtIg==: 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODk4NDhlMWEyNThhNDliOTZiMjgwNDI4ZDE3MjNmNzk3ZjAyN2VlMGUxMjFlNWZl693mWQ==: ]] 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODk4NDhlMWEyNThhNDliOTZiMjgwNDI4ZDE3MjNmNzk3ZjAyN2VlMGUxMjFlNWZl693mWQ==: 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
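Every key and ckey value in this trace is an NVMe in-band authentication secret in the DHHC-1:<id>:<base64>: representation used by nvme-cli and SPDK; the two-digit field after DHHC-1 identifies how the secret was transformed (00 is conventionally a plain secret) and the base64 blob typically carries the secret plus a short checksum. A small illustrative helper for splitting such a string into its fields follows; the helper name and the field interpretations are assumptions, only the string shape and the sample value come from the trace.

  # Illustrative only: split a DHHC-1 secret string into its fields.
  # The field meanings noted below follow the usual nvme-cli convention and
  # are not taken from this trace.
  parse_dhchap_secret() {
      local secret=$1 version xform blob rest
      IFS=: read -r version xform blob rest <<< "$secret"
      echo "format:    $version"          # e.g. DHHC-1
      echo "transform: $xform"            # 00..03; 00 conventionally means plain secret
      echo "payload:   ${blob:0:16}..."   # base64 payload (secret + checksum)
  }

  # Sample value taken verbatim from the trace above:
  parse_dhchap_secret 'DHHC-1:01:NjY0Y2FlYTdmYzRmNzcyY2ZiNmU1NzM1NjQ4MjlhOGXh+K23:'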
00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.081 18:37:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.340 nvme0n1 00:18:13.340 18:37:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.340 18:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:13.340 18:37:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.340 18:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:13.340 18:37:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.340 18:37:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.340 18:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.340 18:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:13.340 18:37:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.340 18:37:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.340 18:37:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.340 18:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:13.340 18:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:18:13.340 18:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:13.340 18:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:13.340 18:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:13.340 18:37:35 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:18:13.340 18:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjY0Y2FlYTdmYzRmNzcyY2ZiNmU1NzM1NjQ4MjlhOGXh+K23: 00:18:13.340 18:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2E3M2E0OTI3ZTc0MGM1MmVhMzI3NGYzOWVmZmUzMjTABfjG: 00:18:13.340 18:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:13.340 18:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:13.340 18:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjY0Y2FlYTdmYzRmNzcyY2ZiNmU1NzM1NjQ4MjlhOGXh+K23: 00:18:13.340 18:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2E3M2E0OTI3ZTc0MGM1MmVhMzI3NGYzOWVmZmUzMjTABfjG: ]] 00:18:13.340 18:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2E3M2E0OTI3ZTc0MGM1MmVhMzI3NGYzOWVmZmUzMjTABfjG: 00:18:13.340 18:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:18:13.340 18:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:13.340 18:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:13.340 18:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:13.340 18:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:13.340 18:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:13.340 18:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:13.340 18:37:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.340 18:37:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.340 18:37:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.340 18:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:13.340 18:37:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:13.340 18:37:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:13.340 18:37:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:13.340 18:37:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:13.340 18:37:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:13.340 18:37:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:13.340 18:37:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:13.340 18:37:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:13.340 18:37:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:13.340 18:37:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:13.340 18:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:13.340 18:37:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.340 18:37:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.599 nvme0n1 00:18:13.599 18:37:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.599 18:37:35 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:13.599 18:37:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.599 18:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:13.599 18:37:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.599 18:37:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.599 18:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.599 18:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:13.599 18:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.599 18:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.599 18:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.599 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:13.599 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:18:13.599 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:13.599 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:13.600 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:13.600 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:13.600 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjYyYTBkYjQxMGJhM2ZkYjdkMzhkNmY4YzE2NDgxOWQ5NWQ5ZWQ2Y2RkZWI1NjQ5aUautg==: 00:18:13.600 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGQ1ZDY0NjM4MmJkMDQyMjJkNzg3YTMwNWM2ODM0OWZecr0a: 00:18:13.600 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:13.600 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:13.600 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjYyYTBkYjQxMGJhM2ZkYjdkMzhkNmY4YzE2NDgxOWQ5NWQ5ZWQ2Y2RkZWI1NjQ5aUautg==: 00:18:13.600 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGQ1ZDY0NjM4MmJkMDQyMjJkNzg3YTMwNWM2ODM0OWZecr0a: ]] 00:18:13.600 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGQ1ZDY0NjM4MmJkMDQyMjJkNzg3YTMwNWM2ODM0OWZecr0a: 00:18:13.600 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:18:13.600 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:13.600 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:13.600 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:13.600 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:13.600 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:13.600 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:13.600 18:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.600 18:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.600 18:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.600 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:13.600 18:37:36 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:18:13.600 18:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:13.600 18:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:13.600 18:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:13.600 18:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:13.600 18:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:13.600 18:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:13.600 18:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:13.600 18:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:13.600 18:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:13.600 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:13.600 18:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.600 18:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.600 nvme0n1 00:18:13.600 18:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.600 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:13.600 18:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.600 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:13.600 18:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.600 18:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.600 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.600 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:13.600 18:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.600 18:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.600 18:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.600 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:13.600 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:18:13.600 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:13.600 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:13.600 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:13.600 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:13.600 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWYxZWUzZGY2Zjg0OWVlM2RkNWFhNDU5NGY4ZWNkNGVkZWY1YTUyYmRhNjFhYWVjNzQ0ZTU0NjAyNDA4YmYwMi7Oo3U=: 00:18:13.600 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:13.600 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:13.600 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:13.600 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZWYxZWUzZGY2Zjg0OWVlM2RkNWFhNDU5NGY4ZWNkNGVkZWY1YTUyYmRhNjFhYWVjNzQ0ZTU0NjAyNDA4YmYwMi7Oo3U=: 00:18:13.600 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:13.600 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:18:13.600 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:13.600 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:13.600 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:13.600 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:13.600 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:13.600 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:13.600 18:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.600 18:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.858 18:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.858 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:13.858 18:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:13.858 18:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:13.858 18:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:13.858 18:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:13.858 18:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:13.858 18:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:13.858 18:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:13.858 18:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:13.858 18:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:13.858 18:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:13.858 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:13.858 18:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.858 18:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.858 nvme0n1 00:18:13.858 18:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.858 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:13.858 18:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.858 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:13.858 18:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.858 18:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.858 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.858 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:13.858 18:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:18:13.858 18:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.858 18:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.858 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:13.858 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:13.858 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:18:13.858 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:13.858 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:13.858 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:13.858 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:13.858 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFiNzhjZWJkYjY2YjZmMTZlOWYzMGFlNmUwY2E0ZjL3gp6z: 00:18:13.858 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTg2ODIxODY5NDA5YzcwZmE1MWEyMTU0ODE2YTk4MzE5ZTEyZjc2ZjYxYjU5MmI4ZWYxNGFkYzYzZTdkZGJmMemFNSU=: 00:18:13.859 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:13.859 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:13.859 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFiNzhjZWJkYjY2YjZmMTZlOWYzMGFlNmUwY2E0ZjL3gp6z: 00:18:13.859 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTg2ODIxODY5NDA5YzcwZmE1MWEyMTU0ODE2YTk4MzE5ZTEyZjc2ZjYxYjU5MmI4ZWYxNGFkYzYzZTdkZGJmMemFNSU=: ]] 00:18:13.859 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTg2ODIxODY5NDA5YzcwZmE1MWEyMTU0ODE2YTk4MzE5ZTEyZjc2ZjYxYjU5MmI4ZWYxNGFkYzYzZTdkZGJmMemFNSU=: 00:18:13.859 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:18:13.859 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:13.859 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:13.859 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:13.859 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:13.859 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:13.859 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:13.859 18:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.859 18:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.859 18:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.859 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:13.859 18:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:13.859 18:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:13.859 18:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:13.859 18:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:13.859 18:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:13.859 18:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
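nvmet_auth_set_key, whose expansion recurs throughout this trace, pushes the digest ('hmac(sha384)'), the DH group (here ffdhe3072) and one key/ckey pair to the kernel nvmet target before the host attempts to connect; the echoes at auth.sh@48-51 are those writes. The trace does not show where the echoes are redirected, so the configfs paths in the sketch below are an assumption based on the standard Linux nvmet auth attributes, and the traced helper actually takes a keyid and looks the pair up in its own keys/ckeys arrays rather than receiving them as arguments.

  # Assumed reconstruction of the target-side key setup. The configfs layout
  # and HOST_NQN are assumptions (in this run the host NQN is
  # nqn.2024-02.io.spdk:host0); the echoed values match what the trace shows.
  nvmet_auth_set_key() {
      local digest=$1 dhgroup=$2 key=$3 ckey=${4:-}
      local host_dir=/sys/kernel/config/nvmet/hosts/$HOST_NQN

      echo "hmac($digest)" > "$host_dir/dhchap_hash"      # e.g. hmac(sha384)
      echo "$dhgroup"      > "$host_dir/dhchap_dhgroup"   # e.g. ffdhe3072
      echo "$key"          > "$host_dir/dhchap_key"
      [[ -z $ckey ]] || echo "$ckey" > "$host_dir/dhchap_ctrl_key"
  }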
00:18:13.859 18:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:13.859 18:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:13.859 18:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:13.859 18:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:13.859 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:13.859 18:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.859 18:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.117 nvme0n1 00:18:14.117 18:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.117 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:14.117 18:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.117 18:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.117 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:14.117 18:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.117 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.117 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:14.117 18:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.118 18:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.118 18:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.118 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:14.118 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:18:14.118 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:14.118 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:14.118 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:14.118 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:14.118 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2E5ZTI2MmViNTFmNjdkNmE0NGQ5YTc3MGM1NmQ0NjQyMjI5NTgzMzI2MDNlOTRiVuBtIg==: 00:18:14.118 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODk4NDhlMWEyNThhNDliOTZiMjgwNDI4ZDE3MjNmNzk3ZjAyN2VlMGUxMjFlNWZl693mWQ==: 00:18:14.118 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:14.118 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:14.118 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2E5ZTI2MmViNTFmNjdkNmE0NGQ5YTc3MGM1NmQ0NjQyMjI5NTgzMzI2MDNlOTRiVuBtIg==: 00:18:14.118 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODk4NDhlMWEyNThhNDliOTZiMjgwNDI4ZDE3MjNmNzk3ZjAyN2VlMGUxMjFlNWZl693mWQ==: ]] 00:18:14.118 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODk4NDhlMWEyNThhNDliOTZiMjgwNDI4ZDE3MjNmNzk3ZjAyN2VlMGUxMjFlNWZl693mWQ==: 00:18:14.118 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
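The host side of each iteration is the connect_authenticate call that closes the block above: restrict bdev_nvme to one digest/DH-group combination, attach a controller with the matching key pair, confirm the controller actually appears, then tear it down before the next keyid. A condensed sketch of that cycle follows, using only RPCs and flags visible in the trace; rpc.py (SPDK's scripts/rpc.py, standing in for the test's rpc_cmd wrapper) being on PATH and the key$keyid/ckey$keyid names having been registered earlier in the script are assumptions.

  # Condensed host-side cycle mirroring the RPCs in the trace above.
  # rpc.py location and the prior registration of key0..key4/ckey0..ckey3
  # are assumptions; all flags below appear verbatim in the trace.
  connect_authenticate() {
      local digest=$1 dhgroup=$2 keyid=$3

      rpc.py bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
      # For the last keyid in this run there is no controller key and the
      # --dhchap-ctrlr-key flag is simply omitted, as the trace shows.

      # Authentication succeeded only if the controller shows up.
      [[ $(rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]

      rpc.py bdev_nvme_detach_controller nvme0
  }

  # e.g. the iteration traced above: connect_authenticate sha384 ffdhe3072 1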
00:18:14.118 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:14.118 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:14.118 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:14.118 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:14.118 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:14.118 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:14.118 18:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.118 18:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.118 18:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.118 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:14.118 18:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:14.118 18:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:14.118 18:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:14.118 18:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:14.118 18:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:14.118 18:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:14.118 18:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:14.118 18:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:14.118 18:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:14.118 18:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:14.118 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:14.118 18:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.118 18:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.118 nvme0n1 00:18:14.118 18:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.118 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:14.118 18:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.118 18:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.118 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:14.118 18:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjY0Y2FlYTdmYzRmNzcyY2ZiNmU1NzM1NjQ4MjlhOGXh+K23: 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2E3M2E0OTI3ZTc0MGM1MmVhMzI3NGYzOWVmZmUzMjTABfjG: 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjY0Y2FlYTdmYzRmNzcyY2ZiNmU1NzM1NjQ4MjlhOGXh+K23: 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2E3M2E0OTI3ZTc0MGM1MmVhMzI3NGYzOWVmZmUzMjTABfjG: ]] 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2E3M2E0OTI3ZTc0MGM1MmVhMzI3NGYzOWVmZmUzMjTABfjG: 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.378 nvme0n1 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjYyYTBkYjQxMGJhM2ZkYjdkMzhkNmY4YzE2NDgxOWQ5NWQ5ZWQ2Y2RkZWI1NjQ5aUautg==: 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGQ1ZDY0NjM4MmJkMDQyMjJkNzg3YTMwNWM2ODM0OWZecr0a: 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjYyYTBkYjQxMGJhM2ZkYjdkMzhkNmY4YzE2NDgxOWQ5NWQ5ZWQ2Y2RkZWI1NjQ5aUautg==: 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGQ1ZDY0NjM4MmJkMDQyMjJkNzg3YTMwNWM2ODM0OWZecr0a: ]] 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGQ1ZDY0NjM4MmJkMDQyMjJkNzg3YTMwNWM2ODM0OWZecr0a: 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.378 18:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.638 nvme0n1 00:18:14.638 18:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.638 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:14.638 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:14.638 18:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.638 18:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.638 18:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.638 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.638 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:14.638 18:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.638 18:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.638 18:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.638 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:14.638 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:18:14.638 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:14.638 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:14.638 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:14.638 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:14.638 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZWYxZWUzZGY2Zjg0OWVlM2RkNWFhNDU5NGY4ZWNkNGVkZWY1YTUyYmRhNjFhYWVjNzQ0ZTU0NjAyNDA4YmYwMi7Oo3U=: 00:18:14.638 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:14.638 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:14.638 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:14.638 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWYxZWUzZGY2Zjg0OWVlM2RkNWFhNDU5NGY4ZWNkNGVkZWY1YTUyYmRhNjFhYWVjNzQ0ZTU0NjAyNDA4YmYwMi7Oo3U=: 00:18:14.638 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:14.638 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:18:14.638 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:14.638 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:14.639 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:14.639 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:14.639 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:14.639 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:14.639 18:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.639 18:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.639 18:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.639 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:14.639 18:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:14.639 18:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:14.639 18:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:14.639 18:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:14.639 18:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:14.639 18:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:14.639 18:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:14.639 18:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:14.639 18:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:14.639 18:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:14.639 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:14.639 18:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.639 18:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.898 nvme0n1 00:18:14.898 18:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.898 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:14.898 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:14.898 18:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.898 18:37:37 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.898 18:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.898 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.898 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:14.898 18:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.898 18:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.898 18:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.898 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:14.898 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:14.898 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:18:14.898 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:14.898 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:14.898 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:14.898 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:14.898 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFiNzhjZWJkYjY2YjZmMTZlOWYzMGFlNmUwY2E0ZjL3gp6z: 00:18:14.898 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTg2ODIxODY5NDA5YzcwZmE1MWEyMTU0ODE2YTk4MzE5ZTEyZjc2ZjYxYjU5MmI4ZWYxNGFkYzYzZTdkZGJmMemFNSU=: 00:18:14.898 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:14.898 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:14.898 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFiNzhjZWJkYjY2YjZmMTZlOWYzMGFlNmUwY2E0ZjL3gp6z: 00:18:14.898 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTg2ODIxODY5NDA5YzcwZmE1MWEyMTU0ODE2YTk4MzE5ZTEyZjc2ZjYxYjU5MmI4ZWYxNGFkYzYzZTdkZGJmMemFNSU=: ]] 00:18:14.898 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTg2ODIxODY5NDA5YzcwZmE1MWEyMTU0ODE2YTk4MzE5ZTEyZjc2ZjYxYjU5MmI4ZWYxNGFkYzYzZTdkZGJmMemFNSU=: 00:18:14.898 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:18:14.898 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:14.898 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:14.898 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:14.898 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:14.898 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:14.898 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:14.898 18:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.898 18:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.898 18:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.898 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:14.898 18:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:14.898 18:37:37 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:18:14.898 18:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:14.898 18:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:14.898 18:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:14.898 18:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:14.898 18:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:14.898 18:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:14.898 18:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:14.898 18:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:14.898 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.898 18:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.898 18:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:15.156 nvme0n1 00:18:15.156 18:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.156 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:15.156 18:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.156 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:15.156 18:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:15.156 18:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.156 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.156 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:15.156 18:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.156 18:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:15.156 18:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.156 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:15.156 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:18:15.156 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:15.156 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:15.156 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:15.156 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:15.156 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2E5ZTI2MmViNTFmNjdkNmE0NGQ5YTc3MGM1NmQ0NjQyMjI5NTgzMzI2MDNlOTRiVuBtIg==: 00:18:15.156 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODk4NDhlMWEyNThhNDliOTZiMjgwNDI4ZDE3MjNmNzk3ZjAyN2VlMGUxMjFlNWZl693mWQ==: 00:18:15.156 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:15.156 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:15.156 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:Y2E5ZTI2MmViNTFmNjdkNmE0NGQ5YTc3MGM1NmQ0NjQyMjI5NTgzMzI2MDNlOTRiVuBtIg==: 00:18:15.156 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODk4NDhlMWEyNThhNDliOTZiMjgwNDI4ZDE3MjNmNzk3ZjAyN2VlMGUxMjFlNWZl693mWQ==: ]] 00:18:15.156 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODk4NDhlMWEyNThhNDliOTZiMjgwNDI4ZDE3MjNmNzk3ZjAyN2VlMGUxMjFlNWZl693mWQ==: 00:18:15.156 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:18:15.156 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:15.156 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:15.156 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:15.156 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:15.156 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:15.156 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:15.156 18:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.156 18:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:15.156 18:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.156 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:15.156 18:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:15.156 18:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:15.156 18:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:15.156 18:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:15.156 18:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:15.156 18:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:15.156 18:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:15.156 18:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:15.156 18:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:15.156 18:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:15.156 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:15.156 18:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.156 18:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:15.156 nvme0n1 00:18:15.156 18:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.415 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:15.415 18:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.415 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:15.415 18:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:15.415 18:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.415 18:37:37 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.415 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:15.415 18:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.415 18:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:15.415 18:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.415 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:15.415 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:18:15.415 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:15.415 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:15.415 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:15.415 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:15.415 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjY0Y2FlYTdmYzRmNzcyY2ZiNmU1NzM1NjQ4MjlhOGXh+K23: 00:18:15.415 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2E3M2E0OTI3ZTc0MGM1MmVhMzI3NGYzOWVmZmUzMjTABfjG: 00:18:15.415 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:15.415 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:15.415 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjY0Y2FlYTdmYzRmNzcyY2ZiNmU1NzM1NjQ4MjlhOGXh+K23: 00:18:15.415 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2E3M2E0OTI3ZTc0MGM1MmVhMzI3NGYzOWVmZmUzMjTABfjG: ]] 00:18:15.415 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2E3M2E0OTI3ZTc0MGM1MmVhMzI3NGYzOWVmZmUzMjTABfjG: 00:18:15.415 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:18:15.415 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:15.415 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:15.415 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:15.415 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:15.415 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:15.415 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:15.415 18:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.415 18:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:15.415 18:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.415 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:15.415 18:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:15.415 18:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:15.415 18:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:15.415 18:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:15.415 18:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:15.415 18:37:37 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:15.415 18:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:15.415 18:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:15.415 18:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:15.415 18:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:15.415 18:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:15.415 18:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.415 18:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:15.415 nvme0n1 00:18:15.415 18:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.415 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:15.415 18:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.415 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:15.415 18:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:15.674 18:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.674 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.674 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:15.674 18:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.674 18:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:15.674 18:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.674 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:15.674 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:18:15.674 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:15.674 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:15.674 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:15.674 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:15.674 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjYyYTBkYjQxMGJhM2ZkYjdkMzhkNmY4YzE2NDgxOWQ5NWQ5ZWQ2Y2RkZWI1NjQ5aUautg==: 00:18:15.674 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGQ1ZDY0NjM4MmJkMDQyMjJkNzg3YTMwNWM2ODM0OWZecr0a: 00:18:15.674 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:15.675 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:15.675 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjYyYTBkYjQxMGJhM2ZkYjdkMzhkNmY4YzE2NDgxOWQ5NWQ5ZWQ2Y2RkZWI1NjQ5aUautg==: 00:18:15.675 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGQ1ZDY0NjM4MmJkMDQyMjJkNzg3YTMwNWM2ODM0OWZecr0a: ]] 00:18:15.675 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGQ1ZDY0NjM4MmJkMDQyMjJkNzg3YTMwNWM2ODM0OWZecr0a: 00:18:15.675 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:18:15.675 18:37:38 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:15.675 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:15.675 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:15.675 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:15.675 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:15.675 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:15.675 18:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.675 18:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:15.675 18:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.675 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:15.675 18:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:15.675 18:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:15.675 18:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:15.675 18:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:15.675 18:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:15.675 18:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:15.675 18:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:15.675 18:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:15.675 18:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:15.675 18:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:15.675 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:15.675 18:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.675 18:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:15.675 nvme0n1 00:18:15.675 18:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.675 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:15.675 18:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.675 18:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:15.675 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:15.675 18:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.934 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.934 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:15.934 18:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.934 18:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:15.934 18:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.934 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:18:15.934 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:18:15.934 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:15.934 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:15.934 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:15.934 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:15.934 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWYxZWUzZGY2Zjg0OWVlM2RkNWFhNDU5NGY4ZWNkNGVkZWY1YTUyYmRhNjFhYWVjNzQ0ZTU0NjAyNDA4YmYwMi7Oo3U=: 00:18:15.934 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:15.934 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:15.934 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:15.934 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWYxZWUzZGY2Zjg0OWVlM2RkNWFhNDU5NGY4ZWNkNGVkZWY1YTUyYmRhNjFhYWVjNzQ0ZTU0NjAyNDA4YmYwMi7Oo3U=: 00:18:15.934 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:15.934 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:18:15.934 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:15.934 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:15.934 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:15.934 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:15.934 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:15.934 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:15.934 18:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.934 18:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:15.934 18:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.934 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:15.934 18:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:15.934 18:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:15.934 18:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:15.934 18:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:15.934 18:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:15.934 18:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:15.934 18:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:15.934 18:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:15.934 18:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:15.934 18:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:15.934 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:15.934 18:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:18:15.934 18:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:15.934 nvme0n1 00:18:15.934 18:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.934 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:15.934 18:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.934 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:15.934 18:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:15.934 18:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.193 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.193 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:16.193 18:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.193 18:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:16.193 18:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.193 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:16.193 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:16.193 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:18:16.193 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:16.193 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:16.193 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:16.193 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:16.193 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFiNzhjZWJkYjY2YjZmMTZlOWYzMGFlNmUwY2E0ZjL3gp6z: 00:18:16.193 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTg2ODIxODY5NDA5YzcwZmE1MWEyMTU0ODE2YTk4MzE5ZTEyZjc2ZjYxYjU5MmI4ZWYxNGFkYzYzZTdkZGJmMemFNSU=: 00:18:16.193 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:16.193 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:16.193 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFiNzhjZWJkYjY2YjZmMTZlOWYzMGFlNmUwY2E0ZjL3gp6z: 00:18:16.193 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTg2ODIxODY5NDA5YzcwZmE1MWEyMTU0ODE2YTk4MzE5ZTEyZjc2ZjYxYjU5MmI4ZWYxNGFkYzYzZTdkZGJmMemFNSU=: ]] 00:18:16.193 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTg2ODIxODY5NDA5YzcwZmE1MWEyMTU0ODE2YTk4MzE5ZTEyZjc2ZjYxYjU5MmI4ZWYxNGFkYzYzZTdkZGJmMemFNSU=: 00:18:16.193 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:18:16.193 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:16.193 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:16.193 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:16.193 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:16.193 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:16.193 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:18:16.193 18:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.193 18:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:16.193 18:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.193 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:16.193 18:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:16.193 18:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:16.193 18:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:16.193 18:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:16.193 18:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:16.193 18:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:16.193 18:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:16.193 18:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:16.193 18:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:16.193 18:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:16.193 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:16.193 18:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.193 18:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:16.452 nvme0n1 00:18:16.452 18:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.452 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:16.452 18:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.452 18:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:16.452 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:16.452 18:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.452 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.452 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:16.452 18:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.452 18:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:16.452 18:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.452 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:16.452 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:18:16.452 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:16.452 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:16.452 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:16.452 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:16.452 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:Y2E5ZTI2MmViNTFmNjdkNmE0NGQ5YTc3MGM1NmQ0NjQyMjI5NTgzMzI2MDNlOTRiVuBtIg==: 00:18:16.452 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODk4NDhlMWEyNThhNDliOTZiMjgwNDI4ZDE3MjNmNzk3ZjAyN2VlMGUxMjFlNWZl693mWQ==: 00:18:16.452 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:16.452 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:16.452 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2E5ZTI2MmViNTFmNjdkNmE0NGQ5YTc3MGM1NmQ0NjQyMjI5NTgzMzI2MDNlOTRiVuBtIg==: 00:18:16.452 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODk4NDhlMWEyNThhNDliOTZiMjgwNDI4ZDE3MjNmNzk3ZjAyN2VlMGUxMjFlNWZl693mWQ==: ]] 00:18:16.452 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODk4NDhlMWEyNThhNDliOTZiMjgwNDI4ZDE3MjNmNzk3ZjAyN2VlMGUxMjFlNWZl693mWQ==: 00:18:16.452 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:18:16.452 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:16.452 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:16.452 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:16.452 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:16.452 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:16.452 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:16.452 18:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.452 18:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:16.452 18:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.452 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:16.452 18:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:16.452 18:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:16.452 18:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:16.452 18:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:16.452 18:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:16.452 18:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:16.452 18:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:16.452 18:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:16.452 18:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:16.452 18:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:16.453 18:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:16.453 18:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.453 18:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:16.710 nvme0n1 00:18:16.710 18:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.710 18:37:39 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:16.710 18:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:16.710 18:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.710 18:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:16.710 18:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.710 18:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.710 18:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:16.711 18:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.711 18:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:16.711 18:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.711 18:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:16.711 18:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:18:16.711 18:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:16.711 18:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:16.711 18:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:16.711 18:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:16.711 18:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjY0Y2FlYTdmYzRmNzcyY2ZiNmU1NzM1NjQ4MjlhOGXh+K23: 00:18:16.711 18:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2E3M2E0OTI3ZTc0MGM1MmVhMzI3NGYzOWVmZmUzMjTABfjG: 00:18:16.711 18:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:16.711 18:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:16.711 18:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjY0Y2FlYTdmYzRmNzcyY2ZiNmU1NzM1NjQ4MjlhOGXh+K23: 00:18:16.711 18:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2E3M2E0OTI3ZTc0MGM1MmVhMzI3NGYzOWVmZmUzMjTABfjG: ]] 00:18:16.711 18:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2E3M2E0OTI3ZTc0MGM1MmVhMzI3NGYzOWVmZmUzMjTABfjG: 00:18:16.711 18:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:18:16.711 18:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:16.711 18:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:16.711 18:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:16.711 18:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:16.711 18:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:16.711 18:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:16.711 18:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.711 18:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:16.711 18:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.711 18:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:16.711 18:37:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:18:16.968 18:37:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:16.968 18:37:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:16.968 18:37:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:16.968 18:37:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:16.968 18:37:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:16.968 18:37:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:16.968 18:37:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:16.968 18:37:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:16.968 18:37:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:16.968 18:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:16.968 18:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.968 18:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:17.225 nvme0n1 00:18:17.225 18:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.225 18:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:17.225 18:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:17.225 18:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.225 18:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:17.225 18:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.225 18:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.225 18:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:17.225 18:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.225 18:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:17.225 18:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.225 18:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:17.225 18:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:18:17.225 18:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:17.225 18:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:17.225 18:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:17.225 18:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:17.225 18:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjYyYTBkYjQxMGJhM2ZkYjdkMzhkNmY4YzE2NDgxOWQ5NWQ5ZWQ2Y2RkZWI1NjQ5aUautg==: 00:18:17.225 18:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGQ1ZDY0NjM4MmJkMDQyMjJkNzg3YTMwNWM2ODM0OWZecr0a: 00:18:17.225 18:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:17.225 18:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:17.225 18:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:MjYyYTBkYjQxMGJhM2ZkYjdkMzhkNmY4YzE2NDgxOWQ5NWQ5ZWQ2Y2RkZWI1NjQ5aUautg==: 00:18:17.225 18:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGQ1ZDY0NjM4MmJkMDQyMjJkNzg3YTMwNWM2ODM0OWZecr0a: ]] 00:18:17.225 18:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGQ1ZDY0NjM4MmJkMDQyMjJkNzg3YTMwNWM2ODM0OWZecr0a: 00:18:17.225 18:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:18:17.225 18:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:17.225 18:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:17.225 18:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:17.225 18:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:17.225 18:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:17.225 18:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:17.225 18:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.225 18:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:17.225 18:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.225 18:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:17.225 18:37:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:17.225 18:37:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:17.225 18:37:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:17.225 18:37:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:17.225 18:37:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:17.225 18:37:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:17.225 18:37:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:17.225 18:37:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:17.225 18:37:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:17.225 18:37:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:17.225 18:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:17.225 18:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.225 18:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:17.543 nvme0n1 00:18:17.543 18:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.543 18:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:17.543 18:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:17.543 18:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.543 18:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:17.543 18:37:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.543 18:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:18:17.543 18:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:17.543 18:37:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.543 18:37:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:17.543 18:37:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.543 18:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:17.543 18:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:18:17.543 18:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:17.543 18:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:17.543 18:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:17.543 18:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:17.543 18:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWYxZWUzZGY2Zjg0OWVlM2RkNWFhNDU5NGY4ZWNkNGVkZWY1YTUyYmRhNjFhYWVjNzQ0ZTU0NjAyNDA4YmYwMi7Oo3U=: 00:18:17.543 18:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:17.543 18:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:17.543 18:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:17.543 18:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWYxZWUzZGY2Zjg0OWVlM2RkNWFhNDU5NGY4ZWNkNGVkZWY1YTUyYmRhNjFhYWVjNzQ0ZTU0NjAyNDA4YmYwMi7Oo3U=: 00:18:17.543 18:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:17.543 18:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:18:17.543 18:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:17.543 18:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:17.543 18:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:17.543 18:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:17.543 18:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:17.543 18:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:17.543 18:37:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.543 18:37:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:17.543 18:37:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.543 18:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:17.543 18:37:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:17.543 18:37:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:17.543 18:37:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:17.543 18:37:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:17.543 18:37:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:17.543 18:37:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:17.543 18:37:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:17.543 18:37:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
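(For reference, the sha384 pass being traced above reduces to the condensed sketch below. This is a paraphrase of the host/auth.sh loop visible in the xtrace output, not the script itself: rpc_cmd and nvmet_auth_set_key are the test helpers already shown in the trace, with rpc_cmd assumed to forward to scripts/rpc.py against the running target; keys/ckeys are assumed to be arrays holding the DHHC-1 secrets echoed above; 10.0.0.1 is the initiator address that get_main_ns_ip resolves in the trace; and only the DH groups exercised in this part of the log are listed.)

# condensed per-dhgroup / per-keyid loop, as exercised by host/auth.sh@101-104 in the trace above
for dhgroup in ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do
    for keyid in "${!keys[@]}"; do
        # target side: install hmac(sha384) + dhgroup + key pair for this keyid into nvmet
        nvmet_auth_set_key sha384 "$dhgroup" "$keyid"
        # host side: restrict bdev_nvme to the same digest/dhgroup ...
        rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
        # ... then connect with the matching key; keyid 4 has no controller key, so ckey stays empty
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"
        # the connect counts as authenticated if the controller shows up, then tear it down
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    done
done

The bare nvme0n1 lines interleaved in the trace appear to be the bdev name printed back by the attach call once the DH-HMAC-CHAP handshake succeeds, just before the get_controllers check runs.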
00:18:17.543 18:37:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:17.543 18:37:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:17.543 18:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:17.543 18:37:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.543 18:37:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:17.801 nvme0n1 00:18:17.801 18:37:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.801 18:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:17.801 18:37:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.801 18:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:17.801 18:37:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:18.060 18:37:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.060 18:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.060 18:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:18.060 18:37:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.060 18:37:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:18.060 18:37:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.060 18:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:18.060 18:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:18.060 18:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:18:18.060 18:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:18.060 18:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:18.060 18:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:18.060 18:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:18.060 18:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFiNzhjZWJkYjY2YjZmMTZlOWYzMGFlNmUwY2E0ZjL3gp6z: 00:18:18.060 18:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTg2ODIxODY5NDA5YzcwZmE1MWEyMTU0ODE2YTk4MzE5ZTEyZjc2ZjYxYjU5MmI4ZWYxNGFkYzYzZTdkZGJmMemFNSU=: 00:18:18.060 18:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:18.060 18:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:18.060 18:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFiNzhjZWJkYjY2YjZmMTZlOWYzMGFlNmUwY2E0ZjL3gp6z: 00:18:18.060 18:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTg2ODIxODY5NDA5YzcwZmE1MWEyMTU0ODE2YTk4MzE5ZTEyZjc2ZjYxYjU5MmI4ZWYxNGFkYzYzZTdkZGJmMemFNSU=: ]] 00:18:18.060 18:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTg2ODIxODY5NDA5YzcwZmE1MWEyMTU0ODE2YTk4MzE5ZTEyZjc2ZjYxYjU5MmI4ZWYxNGFkYzYzZTdkZGJmMemFNSU=: 00:18:18.060 18:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:18:18.060 18:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
00:18:18.060 18:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:18.060 18:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:18.060 18:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:18.060 18:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:18.060 18:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:18.060 18:37:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.060 18:37:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:18.060 18:37:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.060 18:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:18.060 18:37:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:18.060 18:37:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:18.060 18:37:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:18.060 18:37:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:18.060 18:37:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:18.060 18:37:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:18.060 18:37:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:18.060 18:37:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:18.060 18:37:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:18.060 18:37:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:18.060 18:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:18.060 18:37:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.060 18:37:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:18.627 nvme0n1 00:18:18.627 18:37:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.627 18:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:18.627 18:37:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.627 18:37:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:18.627 18:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:18.627 18:37:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.627 18:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.627 18:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:18.627 18:37:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.627 18:37:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:18.627 18:37:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.627 18:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:18.627 18:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:18:18.627 18:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:18.627 18:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:18.627 18:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:18.627 18:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:18.627 18:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2E5ZTI2MmViNTFmNjdkNmE0NGQ5YTc3MGM1NmQ0NjQyMjI5NTgzMzI2MDNlOTRiVuBtIg==: 00:18:18.627 18:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODk4NDhlMWEyNThhNDliOTZiMjgwNDI4ZDE3MjNmNzk3ZjAyN2VlMGUxMjFlNWZl693mWQ==: 00:18:18.627 18:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:18.627 18:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:18.627 18:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2E5ZTI2MmViNTFmNjdkNmE0NGQ5YTc3MGM1NmQ0NjQyMjI5NTgzMzI2MDNlOTRiVuBtIg==: 00:18:18.627 18:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODk4NDhlMWEyNThhNDliOTZiMjgwNDI4ZDE3MjNmNzk3ZjAyN2VlMGUxMjFlNWZl693mWQ==: ]] 00:18:18.627 18:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODk4NDhlMWEyNThhNDliOTZiMjgwNDI4ZDE3MjNmNzk3ZjAyN2VlMGUxMjFlNWZl693mWQ==: 00:18:18.627 18:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:18:18.627 18:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:18.627 18:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:18.627 18:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:18.627 18:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:18.627 18:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:18.627 18:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:18.627 18:37:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.627 18:37:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:18.627 18:37:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.627 18:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:18.627 18:37:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:18.627 18:37:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:18.627 18:37:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:18.627 18:37:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:18.627 18:37:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:18.627 18:37:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:18.627 18:37:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:18.627 18:37:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:18.627 18:37:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:18.627 18:37:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:18.627 18:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:18.627 18:37:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.627 18:37:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:19.194 nvme0n1 00:18:19.194 18:37:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.194 18:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:19.194 18:37:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.194 18:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:19.194 18:37:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:19.194 18:37:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.194 18:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.194 18:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:19.194 18:37:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.194 18:37:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:19.194 18:37:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.194 18:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:19.194 18:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:18:19.194 18:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:19.194 18:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:19.194 18:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:19.194 18:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:19.194 18:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjY0Y2FlYTdmYzRmNzcyY2ZiNmU1NzM1NjQ4MjlhOGXh+K23: 00:18:19.194 18:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2E3M2E0OTI3ZTc0MGM1MmVhMzI3NGYzOWVmZmUzMjTABfjG: 00:18:19.194 18:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:19.194 18:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:19.194 18:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjY0Y2FlYTdmYzRmNzcyY2ZiNmU1NzM1NjQ4MjlhOGXh+K23: 00:18:19.194 18:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2E3M2E0OTI3ZTc0MGM1MmVhMzI3NGYzOWVmZmUzMjTABfjG: ]] 00:18:19.194 18:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2E3M2E0OTI3ZTc0MGM1MmVhMzI3NGYzOWVmZmUzMjTABfjG: 00:18:19.194 18:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:18:19.194 18:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:19.194 18:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:19.194 18:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:19.194 18:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:19.194 18:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:19.194 18:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:18:19.194 18:37:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.194 18:37:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:19.194 18:37:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.194 18:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:19.194 18:37:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:19.194 18:37:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:19.194 18:37:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:19.194 18:37:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:19.194 18:37:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:19.194 18:37:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:19.194 18:37:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:19.194 18:37:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:19.194 18:37:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:19.194 18:37:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:19.194 18:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:19.194 18:37:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.194 18:37:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:19.760 nvme0n1 00:18:19.760 18:37:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.760 18:37:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:19.760 18:37:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.761 18:37:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:19.761 18:37:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:19.761 18:37:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.761 18:37:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.761 18:37:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:19.761 18:37:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.761 18:37:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:19.761 18:37:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.761 18:37:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:19.761 18:37:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:18:19.761 18:37:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:19.761 18:37:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:19.761 18:37:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:19.761 18:37:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:19.761 18:37:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MjYyYTBkYjQxMGJhM2ZkYjdkMzhkNmY4YzE2NDgxOWQ5NWQ5ZWQ2Y2RkZWI1NjQ5aUautg==: 00:18:19.761 18:37:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGQ1ZDY0NjM4MmJkMDQyMjJkNzg3YTMwNWM2ODM0OWZecr0a: 00:18:19.761 18:37:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:19.761 18:37:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:19.761 18:37:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjYyYTBkYjQxMGJhM2ZkYjdkMzhkNmY4YzE2NDgxOWQ5NWQ5ZWQ2Y2RkZWI1NjQ5aUautg==: 00:18:19.761 18:37:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGQ1ZDY0NjM4MmJkMDQyMjJkNzg3YTMwNWM2ODM0OWZecr0a: ]] 00:18:19.761 18:37:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGQ1ZDY0NjM4MmJkMDQyMjJkNzg3YTMwNWM2ODM0OWZecr0a: 00:18:19.761 18:37:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:18:19.761 18:37:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:19.761 18:37:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:19.761 18:37:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:19.761 18:37:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:19.761 18:37:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:19.761 18:37:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:19.761 18:37:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.761 18:37:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:19.761 18:37:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.761 18:37:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:19.761 18:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:19.761 18:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:19.761 18:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:19.761 18:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:19.761 18:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:19.761 18:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:19.761 18:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:19.761 18:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:19.761 18:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:19.761 18:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:19.761 18:37:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:19.761 18:37:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.761 18:37:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:20.329 nvme0n1 00:18:20.329 18:37:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.329 18:37:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:18:20.329 18:37:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:20.329 18:37:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.329 18:37:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:20.329 18:37:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.329 18:37:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.329 18:37:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:20.329 18:37:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.329 18:37:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:20.329 18:37:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.329 18:37:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:20.329 18:37:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:18:20.329 18:37:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:20.329 18:37:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:20.329 18:37:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:20.329 18:37:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:20.329 18:37:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWYxZWUzZGY2Zjg0OWVlM2RkNWFhNDU5NGY4ZWNkNGVkZWY1YTUyYmRhNjFhYWVjNzQ0ZTU0NjAyNDA4YmYwMi7Oo3U=: 00:18:20.329 18:37:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:20.329 18:37:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:20.329 18:37:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:20.329 18:37:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWYxZWUzZGY2Zjg0OWVlM2RkNWFhNDU5NGY4ZWNkNGVkZWY1YTUyYmRhNjFhYWVjNzQ0ZTU0NjAyNDA4YmYwMi7Oo3U=: 00:18:20.329 18:37:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:20.329 18:37:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:18:20.329 18:37:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:20.329 18:37:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:20.329 18:37:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:20.329 18:37:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:20.329 18:37:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:20.329 18:37:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:20.329 18:37:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.329 18:37:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:20.329 18:37:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.329 18:37:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:20.329 18:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:20.329 18:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:20.329 18:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:20.329 18:37:42 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:20.329 18:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:20.329 18:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:20.329 18:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:20.329 18:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:20.329 18:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:20.329 18:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:20.329 18:37:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:20.329 18:37:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.329 18:37:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:20.896 nvme0n1 00:18:20.896 18:37:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.896 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:20.896 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:20.896 18:37:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.896 18:37:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:20.896 18:37:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.896 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.896 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:20.896 18:37:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.896 18:37:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:20.896 18:37:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.896 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:18:20.896 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:20.896 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:20.896 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:18:20.896 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:20.896 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:20.896 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:20.896 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:20.896 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFiNzhjZWJkYjY2YjZmMTZlOWYzMGFlNmUwY2E0ZjL3gp6z: 00:18:20.896 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTg2ODIxODY5NDA5YzcwZmE1MWEyMTU0ODE2YTk4MzE5ZTEyZjc2ZjYxYjU5MmI4ZWYxNGFkYzYzZTdkZGJmMemFNSU=: 00:18:20.896 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:20.896 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:20.896 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MGFiNzhjZWJkYjY2YjZmMTZlOWYzMGFlNmUwY2E0ZjL3gp6z: 00:18:20.896 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTg2ODIxODY5NDA5YzcwZmE1MWEyMTU0ODE2YTk4MzE5ZTEyZjc2ZjYxYjU5MmI4ZWYxNGFkYzYzZTdkZGJmMemFNSU=: ]] 00:18:20.896 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTg2ODIxODY5NDA5YzcwZmE1MWEyMTU0ODE2YTk4MzE5ZTEyZjc2ZjYxYjU5MmI4ZWYxNGFkYzYzZTdkZGJmMemFNSU=: 00:18:20.896 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:18:20.896 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:20.896 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:20.896 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:20.896 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:20.896 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:20.896 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:20.896 18:37:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.896 18:37:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:20.896 18:37:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.896 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:20.896 18:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:20.896 18:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:20.896 18:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:20.896 18:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:20.896 18:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:20.896 18:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:20.896 18:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:20.896 18:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:20.896 18:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:20.896 18:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:20.896 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:20.896 18:37:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.896 18:37:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:20.896 nvme0n1 00:18:20.896 18:37:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.896 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:20.896 18:37:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.896 18:37:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:20.896 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:20.896 18:37:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.156 18:37:43 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2E5ZTI2MmViNTFmNjdkNmE0NGQ5YTc3MGM1NmQ0NjQyMjI5NTgzMzI2MDNlOTRiVuBtIg==: 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODk4NDhlMWEyNThhNDliOTZiMjgwNDI4ZDE3MjNmNzk3ZjAyN2VlMGUxMjFlNWZl693mWQ==: 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2E5ZTI2MmViNTFmNjdkNmE0NGQ5YTc3MGM1NmQ0NjQyMjI5NTgzMzI2MDNlOTRiVuBtIg==: 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODk4NDhlMWEyNThhNDliOTZiMjgwNDI4ZDE3MjNmNzk3ZjAyN2VlMGUxMjFlNWZl693mWQ==: ]] 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODk4NDhlMWEyNThhNDliOTZiMjgwNDI4ZDE3MjNmNzk3ZjAyN2VlMGUxMjFlNWZl693mWQ==: 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.156 nvme0n1 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjY0Y2FlYTdmYzRmNzcyY2ZiNmU1NzM1NjQ4MjlhOGXh+K23: 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2E3M2E0OTI3ZTc0MGM1MmVhMzI3NGYzOWVmZmUzMjTABfjG: 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjY0Y2FlYTdmYzRmNzcyY2ZiNmU1NzM1NjQ4MjlhOGXh+K23: 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2E3M2E0OTI3ZTc0MGM1MmVhMzI3NGYzOWVmZmUzMjTABfjG: ]] 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2E3M2E0OTI3ZTc0MGM1MmVhMzI3NGYzOWVmZmUzMjTABfjG: 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.156 18:37:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.416 nvme0n1 00:18:21.416 18:37:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.416 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:21.416 18:37:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.416 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:21.416 18:37:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.416 18:37:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.416 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.416 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:21.416 18:37:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.416 18:37:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.416 18:37:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.416 18:37:43 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:21.416 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:18:21.416 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:21.416 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:21.416 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:21.416 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:21.416 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjYyYTBkYjQxMGJhM2ZkYjdkMzhkNmY4YzE2NDgxOWQ5NWQ5ZWQ2Y2RkZWI1NjQ5aUautg==: 00:18:21.416 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGQ1ZDY0NjM4MmJkMDQyMjJkNzg3YTMwNWM2ODM0OWZecr0a: 00:18:21.416 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:21.416 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:21.416 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjYyYTBkYjQxMGJhM2ZkYjdkMzhkNmY4YzE2NDgxOWQ5NWQ5ZWQ2Y2RkZWI1NjQ5aUautg==: 00:18:21.416 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGQ1ZDY0NjM4MmJkMDQyMjJkNzg3YTMwNWM2ODM0OWZecr0a: ]] 00:18:21.416 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGQ1ZDY0NjM4MmJkMDQyMjJkNzg3YTMwNWM2ODM0OWZecr0a: 00:18:21.416 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:18:21.416 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:21.416 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:21.416 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:21.416 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:21.416 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:21.416 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:21.416 18:37:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.416 18:37:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.416 18:37:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.416 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:21.416 18:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:21.416 18:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:21.416 18:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:21.416 18:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:21.416 18:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:21.416 18:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:21.416 18:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:21.416 18:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:21.416 18:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:21.416 18:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:21.416 18:37:43 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:21.416 18:37:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.416 18:37:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.416 nvme0n1 00:18:21.416 18:37:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.416 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:21.416 18:37:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.416 18:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:21.416 18:37:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.416 18:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.676 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.676 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:21.676 18:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.676 18:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.676 18:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.676 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:21.676 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:18:21.676 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:21.676 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:21.676 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:21.676 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:21.676 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWYxZWUzZGY2Zjg0OWVlM2RkNWFhNDU5NGY4ZWNkNGVkZWY1YTUyYmRhNjFhYWVjNzQ0ZTU0NjAyNDA4YmYwMi7Oo3U=: 00:18:21.676 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:21.676 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:21.676 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:21.676 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWYxZWUzZGY2Zjg0OWVlM2RkNWFhNDU5NGY4ZWNkNGVkZWY1YTUyYmRhNjFhYWVjNzQ0ZTU0NjAyNDA4YmYwMi7Oo3U=: 00:18:21.676 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:21.676 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:18:21.676 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:21.676 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:21.677 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:21.677 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:21.677 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:21.677 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:21.677 18:37:44 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.677 18:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.677 18:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.677 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:21.677 18:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:21.677 18:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:21.677 18:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:21.677 18:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:21.677 18:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:21.677 18:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:21.677 18:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:21.677 18:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:21.677 18:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:21.677 18:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:21.677 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:21.677 18:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.677 18:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.677 nvme0n1 00:18:21.677 18:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.677 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:21.677 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:21.677 18:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.677 18:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.677 18:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.677 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.677 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:21.677 18:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.677 18:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.677 18:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.677 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:21.677 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:21.677 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:18:21.677 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:21.677 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:21.677 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:21.677 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:21.677 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MGFiNzhjZWJkYjY2YjZmMTZlOWYzMGFlNmUwY2E0ZjL3gp6z: 00:18:21.677 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTg2ODIxODY5NDA5YzcwZmE1MWEyMTU0ODE2YTk4MzE5ZTEyZjc2ZjYxYjU5MmI4ZWYxNGFkYzYzZTdkZGJmMemFNSU=: 00:18:21.677 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:21.677 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:21.677 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFiNzhjZWJkYjY2YjZmMTZlOWYzMGFlNmUwY2E0ZjL3gp6z: 00:18:21.677 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTg2ODIxODY5NDA5YzcwZmE1MWEyMTU0ODE2YTk4MzE5ZTEyZjc2ZjYxYjU5MmI4ZWYxNGFkYzYzZTdkZGJmMemFNSU=: ]] 00:18:21.677 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTg2ODIxODY5NDA5YzcwZmE1MWEyMTU0ODE2YTk4MzE5ZTEyZjc2ZjYxYjU5MmI4ZWYxNGFkYzYzZTdkZGJmMemFNSU=: 00:18:21.677 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:18:21.677 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:21.677 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:21.677 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:21.677 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:21.677 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:21.677 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:21.677 18:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.677 18:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.677 18:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.677 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:21.677 18:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:21.677 18:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:21.677 18:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:21.677 18:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:21.677 18:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:21.677 18:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:21.677 18:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:21.677 18:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:21.677 18:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:21.677 18:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:21.677 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:21.677 18:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.677 18:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.937 nvme0n1 00:18:21.937 18:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.937 
18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:21.937 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:21.937 18:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.937 18:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.937 18:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.937 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.937 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:21.937 18:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.937 18:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.937 18:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.937 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:21.937 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:18:21.937 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:21.937 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:21.937 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:21.937 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:21.937 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2E5ZTI2MmViNTFmNjdkNmE0NGQ5YTc3MGM1NmQ0NjQyMjI5NTgzMzI2MDNlOTRiVuBtIg==: 00:18:21.937 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODk4NDhlMWEyNThhNDliOTZiMjgwNDI4ZDE3MjNmNzk3ZjAyN2VlMGUxMjFlNWZl693mWQ==: 00:18:21.937 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:21.937 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:21.937 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2E5ZTI2MmViNTFmNjdkNmE0NGQ5YTc3MGM1NmQ0NjQyMjI5NTgzMzI2MDNlOTRiVuBtIg==: 00:18:21.937 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODk4NDhlMWEyNThhNDliOTZiMjgwNDI4ZDE3MjNmNzk3ZjAyN2VlMGUxMjFlNWZl693mWQ==: ]] 00:18:21.937 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODk4NDhlMWEyNThhNDliOTZiMjgwNDI4ZDE3MjNmNzk3ZjAyN2VlMGUxMjFlNWZl693mWQ==: 00:18:21.937 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:18:21.937 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:21.937 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:21.937 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:21.937 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:21.937 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:21.937 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:21.937 18:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.937 18:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.937 18:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.937 18:37:44 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:21.937 18:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:21.937 18:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:21.937 18:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:21.937 18:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:21.937 18:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:21.937 18:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:21.937 18:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:21.937 18:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:21.937 18:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:21.937 18:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:21.937 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.937 18:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.937 18:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.937 nvme0n1 00:18:21.937 18:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.937 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:21.937 18:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.937 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:21.937 18:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:22.196 18:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.196 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.196 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:22.196 18:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.196 18:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:22.196 18:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.196 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:22.196 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:18:22.196 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:22.196 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:22.196 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:22.196 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:22.196 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjY0Y2FlYTdmYzRmNzcyY2ZiNmU1NzM1NjQ4MjlhOGXh+K23: 00:18:22.196 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2E3M2E0OTI3ZTc0MGM1MmVhMzI3NGYzOWVmZmUzMjTABfjG: 00:18:22.196 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:22.196 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:18:22.196 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjY0Y2FlYTdmYzRmNzcyY2ZiNmU1NzM1NjQ4MjlhOGXh+K23: 00:18:22.196 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2E3M2E0OTI3ZTc0MGM1MmVhMzI3NGYzOWVmZmUzMjTABfjG: ]] 00:18:22.196 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2E3M2E0OTI3ZTc0MGM1MmVhMzI3NGYzOWVmZmUzMjTABfjG: 00:18:22.196 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:18:22.196 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:22.196 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:22.196 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:22.196 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:22.196 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:22.196 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:22.196 18:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.196 18:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:22.196 18:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.196 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:22.196 18:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:22.196 18:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:22.196 18:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:22.196 18:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:22.196 18:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:22.196 18:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:22.196 18:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:22.196 18:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:22.196 18:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:22.196 18:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:22.196 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:22.196 18:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.196 18:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:22.196 nvme0n1 00:18:22.196 18:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.196 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:22.196 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:22.196 18:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.196 18:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:22.196 18:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.196 18:37:44 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.196 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:22.196 18:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.196 18:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:22.196 18:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.196 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:22.196 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:18:22.196 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:22.196 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:22.196 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:22.196 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:22.196 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjYyYTBkYjQxMGJhM2ZkYjdkMzhkNmY4YzE2NDgxOWQ5NWQ5ZWQ2Y2RkZWI1NjQ5aUautg==: 00:18:22.196 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGQ1ZDY0NjM4MmJkMDQyMjJkNzg3YTMwNWM2ODM0OWZecr0a: 00:18:22.196 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:22.196 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:22.196 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjYyYTBkYjQxMGJhM2ZkYjdkMzhkNmY4YzE2NDgxOWQ5NWQ5ZWQ2Y2RkZWI1NjQ5aUautg==: 00:18:22.196 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGQ1ZDY0NjM4MmJkMDQyMjJkNzg3YTMwNWM2ODM0OWZecr0a: ]] 00:18:22.196 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGQ1ZDY0NjM4MmJkMDQyMjJkNzg3YTMwNWM2ODM0OWZecr0a: 00:18:22.196 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:18:22.196 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:22.196 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:22.196 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:22.196 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:22.196 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:22.196 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:22.196 18:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.196 18:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:22.196 18:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.196 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:22.196 18:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:22.196 18:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:22.196 18:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:22.197 18:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:22.197 18:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
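
Every attach is preceded by the same get_main_ns_ip trace that brackets this point (nvmf/common.sh@741-755): an associative array maps each transport to the name of the environment variable that carries the address, the entry for the active transport (tcp -> NVMF_INITIATOR_IP) is selected, and its value, 10.0.0.1, is echoed. A rough reconstruction of that helper from the xtrace alone follows; the TEST_TRANSPORT variable name and the indirect ${!ip} expansion are assumptions, since the trace only prints already-expanded values.

  get_main_ns_ip() {
      local ip
      local -A ip_candidates=()
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # variable names, not addresses
      ip_candidates["tcp"]=NVMF_INITIATOR_IP

      [[ -z $TEST_TRANSPORT ]] && return 1         # trace shows: [[ -z tcp ]]
      ip=${ip_candidates[$TEST_TRANSPORT]}
      [[ -z $ip ]] && return 1                     # trace shows: [[ -z NVMF_INITIATOR_IP ]]
      [[ -z ${!ip} ]] && return 1                  # expands to 10.0.0.1 in this run
      echo "${!ip}"
  }
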
00:18:22.197 18:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:22.197 18:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:22.197 18:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:22.197 18:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:22.197 18:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:22.197 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:22.197 18:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.197 18:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:22.455 nvme0n1 00:18:22.455 18:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.455 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:22.455 18:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.455 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:22.455 18:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:22.455 18:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.455 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.455 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:22.455 18:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.455 18:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:22.455 18:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.455 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:22.455 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:18:22.455 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:22.455 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:22.455 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:22.455 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:22.455 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWYxZWUzZGY2Zjg0OWVlM2RkNWFhNDU5NGY4ZWNkNGVkZWY1YTUyYmRhNjFhYWVjNzQ0ZTU0NjAyNDA4YmYwMi7Oo3U=: 00:18:22.455 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:22.455 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:22.455 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:22.455 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWYxZWUzZGY2Zjg0OWVlM2RkNWFhNDU5NGY4ZWNkNGVkZWY1YTUyYmRhNjFhYWVjNzQ0ZTU0NjAyNDA4YmYwMi7Oo3U=: 00:18:22.455 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:22.455 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:18:22.455 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:22.455 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:22.455 
18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:22.455 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:22.455 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:22.455 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:22.455 18:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.455 18:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:22.455 18:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.455 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:22.455 18:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:22.455 18:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:22.455 18:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:22.455 18:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:22.455 18:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:22.455 18:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:22.455 18:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:22.455 18:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:22.455 18:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:22.455 18:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:22.455 18:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:22.455 18:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.455 18:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:22.714 nvme0n1 00:18:22.714 18:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.714 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:22.714 18:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.714 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:22.714 18:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:22.714 18:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.714 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.714 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:22.714 18:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.714 18:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:22.714 18:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.714 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:22.714 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:22.714 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:18:22.714 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:22.714 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:22.714 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:22.714 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:22.714 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFiNzhjZWJkYjY2YjZmMTZlOWYzMGFlNmUwY2E0ZjL3gp6z: 00:18:22.714 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTg2ODIxODY5NDA5YzcwZmE1MWEyMTU0ODE2YTk4MzE5ZTEyZjc2ZjYxYjU5MmI4ZWYxNGFkYzYzZTdkZGJmMemFNSU=: 00:18:22.714 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:22.714 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:22.714 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFiNzhjZWJkYjY2YjZmMTZlOWYzMGFlNmUwY2E0ZjL3gp6z: 00:18:22.714 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTg2ODIxODY5NDA5YzcwZmE1MWEyMTU0ODE2YTk4MzE5ZTEyZjc2ZjYxYjU5MmI4ZWYxNGFkYzYzZTdkZGJmMemFNSU=: ]] 00:18:22.714 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTg2ODIxODY5NDA5YzcwZmE1MWEyMTU0ODE2YTk4MzE5ZTEyZjc2ZjYxYjU5MmI4ZWYxNGFkYzYzZTdkZGJmMemFNSU=: 00:18:22.714 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:18:22.714 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:22.714 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:22.714 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:22.714 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:22.714 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:22.714 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:22.714 18:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.714 18:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:22.714 18:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.714 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:22.714 18:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:22.714 18:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:22.714 18:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:22.714 18:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:22.714 18:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:22.714 18:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:22.714 18:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:22.714 18:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:22.714 18:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:22.714 18:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:22.714 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:22.714 18:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.714 18:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:22.973 nvme0n1 00:18:22.973 18:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.973 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:22.973 18:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.973 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:22.973 18:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:22.973 18:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.973 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.973 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:22.973 18:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.973 18:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:22.973 18:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.973 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:22.973 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:18:22.973 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:22.973 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:22.973 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:22.973 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:22.973 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2E5ZTI2MmViNTFmNjdkNmE0NGQ5YTc3MGM1NmQ0NjQyMjI5NTgzMzI2MDNlOTRiVuBtIg==: 00:18:22.973 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODk4NDhlMWEyNThhNDliOTZiMjgwNDI4ZDE3MjNmNzk3ZjAyN2VlMGUxMjFlNWZl693mWQ==: 00:18:22.973 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:22.973 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:22.973 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2E5ZTI2MmViNTFmNjdkNmE0NGQ5YTc3MGM1NmQ0NjQyMjI5NTgzMzI2MDNlOTRiVuBtIg==: 00:18:22.973 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODk4NDhlMWEyNThhNDliOTZiMjgwNDI4ZDE3MjNmNzk3ZjAyN2VlMGUxMjFlNWZl693mWQ==: ]] 00:18:22.973 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODk4NDhlMWEyNThhNDliOTZiMjgwNDI4ZDE3MjNmNzk3ZjAyN2VlMGUxMjFlNWZl693mWQ==: 00:18:22.973 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:18:22.973 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:22.973 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:22.973 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:22.973 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:22.973 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:22.973 18:37:45 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:22.973 18:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.973 18:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:22.973 18:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.973 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:22.973 18:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:22.973 18:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:22.973 18:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:22.973 18:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:22.973 18:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:22.973 18:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:22.973 18:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:22.973 18:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:22.973 18:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:22.973 18:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:22.973 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:22.973 18:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.973 18:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:23.232 nvme0n1 00:18:23.232 18:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.232 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:23.232 18:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.232 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:23.232 18:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:23.232 18:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.232 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.232 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:23.232 18:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.232 18:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:23.232 18:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.232 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:23.232 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:18:23.232 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:23.232 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:23.232 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:23.232 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
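
In each of these rounds, key IDs 0 through 3 carry both a host secret and a controller secret (bidirectional authentication), while key ID 4 has an empty ckey and the attach is issued without --dhchap-ctrlr-key at all. The trace line at host/auth.sh@58, ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}), is what makes that optional: bash's :+ expansion yields the extra arguments only when the controller secret is non-empty. A small self-contained illustration of that idiom (the function name and the echoed strings are illustrative only):

  build_attach_args() {
      # Mirrors host/auth.sh@58: emit --dhchap-ctrlr-key only when a controller
      # secret exists for this key ID.
      local keyid=$1 ctrlr_secret=$2
      local -a ckey=(${ctrlr_secret:+--dhchap-ctrlr-key "ckey${keyid}"})
      echo "--dhchap-key key${keyid} ${ckey[*]}"
  }
  build_attach_args 2 "DHHC-1:01:..."   # --dhchap-key key2 --dhchap-ctrlr-key ckey2
  build_attach_args 4 ""                # --dhchap-key key4
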
00:18:23.232 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjY0Y2FlYTdmYzRmNzcyY2ZiNmU1NzM1NjQ4MjlhOGXh+K23: 00:18:23.232 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2E3M2E0OTI3ZTc0MGM1MmVhMzI3NGYzOWVmZmUzMjTABfjG: 00:18:23.232 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:23.232 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:23.232 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjY0Y2FlYTdmYzRmNzcyY2ZiNmU1NzM1NjQ4MjlhOGXh+K23: 00:18:23.232 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2E3M2E0OTI3ZTc0MGM1MmVhMzI3NGYzOWVmZmUzMjTABfjG: ]] 00:18:23.232 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2E3M2E0OTI3ZTc0MGM1MmVhMzI3NGYzOWVmZmUzMjTABfjG: 00:18:23.232 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:18:23.232 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:23.232 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:23.232 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:23.232 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:23.232 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:23.232 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:23.232 18:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.232 18:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:23.232 18:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.232 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:23.232 18:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:23.232 18:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:23.232 18:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:23.232 18:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:23.232 18:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:23.232 18:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:23.232 18:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:23.232 18:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:23.232 18:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:23.232 18:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:23.232 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:23.232 18:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.232 18:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:23.232 nvme0n1 00:18:23.232 18:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.232 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:18:23.232 18:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.232 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:23.232 18:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:23.489 18:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.489 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.489 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:23.489 18:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.489 18:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:23.489 18:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.489 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:23.489 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:18:23.489 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:23.489 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:23.489 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:23.489 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:23.489 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjYyYTBkYjQxMGJhM2ZkYjdkMzhkNmY4YzE2NDgxOWQ5NWQ5ZWQ2Y2RkZWI1NjQ5aUautg==: 00:18:23.489 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGQ1ZDY0NjM4MmJkMDQyMjJkNzg3YTMwNWM2ODM0OWZecr0a: 00:18:23.489 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:23.489 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:23.489 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjYyYTBkYjQxMGJhM2ZkYjdkMzhkNmY4YzE2NDgxOWQ5NWQ5ZWQ2Y2RkZWI1NjQ5aUautg==: 00:18:23.490 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGQ1ZDY0NjM4MmJkMDQyMjJkNzg3YTMwNWM2ODM0OWZecr0a: ]] 00:18:23.490 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGQ1ZDY0NjM4MmJkMDQyMjJkNzg3YTMwNWM2ODM0OWZecr0a: 00:18:23.490 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:18:23.490 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:23.490 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:23.490 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:23.490 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:23.490 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:23.490 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:23.490 18:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.490 18:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:23.490 18:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.490 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:23.490 18:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:18:23.490 18:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:23.490 18:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:23.490 18:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:23.490 18:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:23.490 18:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:23.490 18:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:23.490 18:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:23.490 18:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:23.490 18:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:23.490 18:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:23.490 18:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.490 18:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:23.490 nvme0n1 00:18:23.490 18:37:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.490 18:37:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:23.490 18:37:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.490 18:37:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:23.490 18:37:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:23.490 18:37:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.748 18:37:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.748 18:37:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:23.748 18:37:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.748 18:37:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:23.748 18:37:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.748 18:37:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:23.748 18:37:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:18:23.748 18:37:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:23.748 18:37:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:23.748 18:37:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:23.748 18:37:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:23.748 18:37:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWYxZWUzZGY2Zjg0OWVlM2RkNWFhNDU5NGY4ZWNkNGVkZWY1YTUyYmRhNjFhYWVjNzQ0ZTU0NjAyNDA4YmYwMi7Oo3U=: 00:18:23.748 18:37:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:23.748 18:37:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:23.748 18:37:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:23.748 18:37:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZWYxZWUzZGY2Zjg0OWVlM2RkNWFhNDU5NGY4ZWNkNGVkZWY1YTUyYmRhNjFhYWVjNzQ0ZTU0NjAyNDA4YmYwMi7Oo3U=: 00:18:23.748 18:37:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:23.748 18:37:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:18:23.748 18:37:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:23.748 18:37:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:23.748 18:37:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:23.748 18:37:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:23.748 18:37:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:23.748 18:37:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:23.748 18:37:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.748 18:37:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:23.748 18:37:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.748 18:37:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:23.748 18:37:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:23.748 18:37:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:23.748 18:37:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:23.748 18:37:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:23.748 18:37:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:23.748 18:37:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:23.748 18:37:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:23.748 18:37:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:23.748 18:37:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:23.748 18:37:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:23.748 18:37:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:23.748 18:37:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.748 18:37:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:23.748 nvme0n1 00:18:23.748 18:37:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.748 18:37:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:23.748 18:37:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.748 18:37:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:23.748 18:37:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:23.748 18:37:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.007 18:37:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.007 18:37:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:24.007 18:37:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:18:24.007 18:37:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.007 18:37:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.007 18:37:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:24.007 18:37:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:24.007 18:37:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:18:24.007 18:37:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:24.007 18:37:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:24.007 18:37:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:24.007 18:37:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:24.007 18:37:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFiNzhjZWJkYjY2YjZmMTZlOWYzMGFlNmUwY2E0ZjL3gp6z: 00:18:24.007 18:37:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTg2ODIxODY5NDA5YzcwZmE1MWEyMTU0ODE2YTk4MzE5ZTEyZjc2ZjYxYjU5MmI4ZWYxNGFkYzYzZTdkZGJmMemFNSU=: 00:18:24.007 18:37:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:24.008 18:37:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:24.008 18:37:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFiNzhjZWJkYjY2YjZmMTZlOWYzMGFlNmUwY2E0ZjL3gp6z: 00:18:24.008 18:37:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTg2ODIxODY5NDA5YzcwZmE1MWEyMTU0ODE2YTk4MzE5ZTEyZjc2ZjYxYjU5MmI4ZWYxNGFkYzYzZTdkZGJmMemFNSU=: ]] 00:18:24.008 18:37:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTg2ODIxODY5NDA5YzcwZmE1MWEyMTU0ODE2YTk4MzE5ZTEyZjc2ZjYxYjU5MmI4ZWYxNGFkYzYzZTdkZGJmMemFNSU=: 00:18:24.008 18:37:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:18:24.008 18:37:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:24.008 18:37:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:24.008 18:37:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:24.008 18:37:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:24.008 18:37:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:24.008 18:37:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:24.008 18:37:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.008 18:37:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.008 18:37:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.008 18:37:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:24.008 18:37:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:24.008 18:37:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:24.008 18:37:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:24.008 18:37:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:24.008 18:37:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:24.008 18:37:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
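
The DH group has just advanced from ffdhe4096 to ffdhe6144, and the loop headers at host/auth.sh@101-103 give the shape of the whole sweep: an outer loop over DH groups and an inner loop over the key indices. A compact, runnable rendering of that control flow, listing only the groups and key IDs that actually appear in this excerpt (the real arrays may be longer, and the echo stands in for the nvmet_auth_set_key/connect_authenticate pair):

  dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)   # groups seen in this excerpt
  keys=(k0 k1 k2 k3 k4)                                # placeholders for the DHHC-1 secrets
  for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do
          echo "nvmet_auth_set_key sha512 $dhgroup $keyid; connect_authenticate sha512 $dhgroup $keyid"
      done
  done
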
00:18:24.008 18:37:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:24.008 18:37:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:24.008 18:37:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:24.008 18:37:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:24.008 18:37:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:24.008 18:37:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.008 18:37:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.267 nvme0n1 00:18:24.267 18:37:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.267 18:37:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:24.267 18:37:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.267 18:37:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:24.267 18:37:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.267 18:37:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.267 18:37:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.267 18:37:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:24.267 18:37:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.267 18:37:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.267 18:37:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.267 18:37:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:24.267 18:37:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:18:24.267 18:37:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:24.267 18:37:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:24.267 18:37:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:24.267 18:37:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:24.267 18:37:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2E5ZTI2MmViNTFmNjdkNmE0NGQ5YTc3MGM1NmQ0NjQyMjI5NTgzMzI2MDNlOTRiVuBtIg==: 00:18:24.267 18:37:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODk4NDhlMWEyNThhNDliOTZiMjgwNDI4ZDE3MjNmNzk3ZjAyN2VlMGUxMjFlNWZl693mWQ==: 00:18:24.267 18:37:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:24.267 18:37:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:24.267 18:37:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2E5ZTI2MmViNTFmNjdkNmE0NGQ5YTc3MGM1NmQ0NjQyMjI5NTgzMzI2MDNlOTRiVuBtIg==: 00:18:24.267 18:37:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODk4NDhlMWEyNThhNDliOTZiMjgwNDI4ZDE3MjNmNzk3ZjAyN2VlMGUxMjFlNWZl693mWQ==: ]] 00:18:24.267 18:37:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODk4NDhlMWEyNThhNDliOTZiMjgwNDI4ZDE3MjNmNzk3ZjAyN2VlMGUxMjFlNWZl693mWQ==: 00:18:24.267 18:37:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
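
The repeated [[ nvme0 == \n\v\m\e\0 ]] lines are not log corruption: host/auth.sh@64 compares the jq output against the expected controller name with a quoted right-hand side, and xtrace prints a literal (non-glob) pattern with every character backslash-escaped. A two-line way to reproduce that rendering:

  set -x
  name=nvme0
  [[ $name == "nvme0" ]] && echo match   # xtrace prints: [[ nvme0 == \n\v\m\e\0 ]]
  set +x
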
00:18:24.267 18:37:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:24.267 18:37:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:24.267 18:37:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:24.267 18:37:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:24.267 18:37:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:24.267 18:37:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:24.267 18:37:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.267 18:37:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.267 18:37:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.267 18:37:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:24.267 18:37:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:24.267 18:37:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:24.267 18:37:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:24.267 18:37:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:24.267 18:37:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:24.267 18:37:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:24.267 18:37:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:24.267 18:37:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:24.267 18:37:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:24.267 18:37:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:24.267 18:37:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:24.267 18:37:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.267 18:37:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.526 nvme0n1 00:18:24.526 18:37:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.526 18:37:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:24.526 18:37:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.526 18:37:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:24.526 18:37:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.526 18:37:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.526 18:37:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.526 18:37:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:24.526 18:37:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.526 18:37:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.526 18:37:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.526 18:37:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:18:24.526 18:37:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:18:24.526 18:37:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:24.526 18:37:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:24.526 18:37:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:24.526 18:37:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:24.526 18:37:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjY0Y2FlYTdmYzRmNzcyY2ZiNmU1NzM1NjQ4MjlhOGXh+K23: 00:18:24.526 18:37:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2E3M2E0OTI3ZTc0MGM1MmVhMzI3NGYzOWVmZmUzMjTABfjG: 00:18:24.526 18:37:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:24.526 18:37:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:24.526 18:37:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjY0Y2FlYTdmYzRmNzcyY2ZiNmU1NzM1NjQ4MjlhOGXh+K23: 00:18:24.526 18:37:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2E3M2E0OTI3ZTc0MGM1MmVhMzI3NGYzOWVmZmUzMjTABfjG: ]] 00:18:24.526 18:37:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2E3M2E0OTI3ZTc0MGM1MmVhMzI3NGYzOWVmZmUzMjTABfjG: 00:18:24.526 18:37:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:18:24.526 18:37:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:24.526 18:37:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:24.526 18:37:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:24.526 18:37:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:24.526 18:37:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:24.526 18:37:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:24.526 18:37:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.526 18:37:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.783 18:37:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.783 18:37:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:24.783 18:37:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:24.783 18:37:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:24.783 18:37:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:24.783 18:37:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:24.783 18:37:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:24.783 18:37:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:24.783 18:37:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:24.783 18:37:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:24.783 18:37:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:24.783 18:37:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:24.783 18:37:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:24.783 18:37:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.783 18:37:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:25.041 nvme0n1 00:18:25.041 18:37:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.041 18:37:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:25.041 18:37:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.041 18:37:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:25.041 18:37:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:25.041 18:37:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.041 18:37:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.041 18:37:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:25.041 18:37:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.041 18:37:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:25.041 18:37:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.041 18:37:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:25.041 18:37:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:18:25.041 18:37:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:25.041 18:37:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:25.041 18:37:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:25.041 18:37:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:25.041 18:37:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjYyYTBkYjQxMGJhM2ZkYjdkMzhkNmY4YzE2NDgxOWQ5NWQ5ZWQ2Y2RkZWI1NjQ5aUautg==: 00:18:25.041 18:37:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGQ1ZDY0NjM4MmJkMDQyMjJkNzg3YTMwNWM2ODM0OWZecr0a: 00:18:25.041 18:37:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:25.041 18:37:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:25.041 18:37:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjYyYTBkYjQxMGJhM2ZkYjdkMzhkNmY4YzE2NDgxOWQ5NWQ5ZWQ2Y2RkZWI1NjQ5aUautg==: 00:18:25.041 18:37:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGQ1ZDY0NjM4MmJkMDQyMjJkNzg3YTMwNWM2ODM0OWZecr0a: ]] 00:18:25.041 18:37:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGQ1ZDY0NjM4MmJkMDQyMjJkNzg3YTMwNWM2ODM0OWZecr0a: 00:18:25.041 18:37:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:18:25.041 18:37:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:25.041 18:37:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:25.041 18:37:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:25.041 18:37:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:25.041 18:37:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:25.041 18:37:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:25.041 18:37:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.041 18:37:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:25.041 18:37:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.041 18:37:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:25.041 18:37:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:25.041 18:37:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:25.041 18:37:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:25.041 18:37:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:25.041 18:37:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:25.041 18:37:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:25.041 18:37:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:25.041 18:37:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:25.041 18:37:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:25.041 18:37:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:25.041 18:37:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:25.041 18:37:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.041 18:37:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:25.299 nvme0n1 00:18:25.299 18:37:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.299 18:37:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:25.299 18:37:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.299 18:37:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:25.299 18:37:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:25.299 18:37:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.299 18:37:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.299 18:37:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:25.299 18:37:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.299 18:37:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:25.299 18:37:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.299 18:37:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:25.300 18:37:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:18:25.300 18:37:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:25.300 18:37:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:25.300 18:37:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:25.300 18:37:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:25.300 18:37:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZWYxZWUzZGY2Zjg0OWVlM2RkNWFhNDU5NGY4ZWNkNGVkZWY1YTUyYmRhNjFhYWVjNzQ0ZTU0NjAyNDA4YmYwMi7Oo3U=: 00:18:25.300 18:37:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:25.300 18:37:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:25.300 18:37:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:25.300 18:37:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWYxZWUzZGY2Zjg0OWVlM2RkNWFhNDU5NGY4ZWNkNGVkZWY1YTUyYmRhNjFhYWVjNzQ0ZTU0NjAyNDA4YmYwMi7Oo3U=: 00:18:25.300 18:37:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:25.300 18:37:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:18:25.300 18:37:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:25.300 18:37:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:25.300 18:37:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:25.300 18:37:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:25.300 18:37:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:25.300 18:37:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:25.300 18:37:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.300 18:37:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:25.300 18:37:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.300 18:37:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:25.300 18:37:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:25.300 18:37:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:25.300 18:37:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:25.300 18:37:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:25.300 18:37:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:25.300 18:37:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:25.300 18:37:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:25.300 18:37:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:25.300 18:37:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:25.300 18:37:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:25.300 18:37:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:25.300 18:37:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.300 18:37:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:25.866 nvme0n1 00:18:25.866 18:37:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.866 18:37:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:25.866 18:37:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.866 18:37:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:25.866 18:37:48 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:25.866 18:37:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.866 18:37:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.866 18:37:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:25.866 18:37:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.866 18:37:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:25.866 18:37:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.866 18:37:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:25.866 18:37:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:25.866 18:37:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:18:25.866 18:37:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:25.866 18:37:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:25.866 18:37:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:25.866 18:37:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:25.866 18:37:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFiNzhjZWJkYjY2YjZmMTZlOWYzMGFlNmUwY2E0ZjL3gp6z: 00:18:25.866 18:37:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTg2ODIxODY5NDA5YzcwZmE1MWEyMTU0ODE2YTk4MzE5ZTEyZjc2ZjYxYjU5MmI4ZWYxNGFkYzYzZTdkZGJmMemFNSU=: 00:18:25.866 18:37:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:25.866 18:37:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:25.866 18:37:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFiNzhjZWJkYjY2YjZmMTZlOWYzMGFlNmUwY2E0ZjL3gp6z: 00:18:25.866 18:37:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTg2ODIxODY5NDA5YzcwZmE1MWEyMTU0ODE2YTk4MzE5ZTEyZjc2ZjYxYjU5MmI4ZWYxNGFkYzYzZTdkZGJmMemFNSU=: ]] 00:18:25.866 18:37:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTg2ODIxODY5NDA5YzcwZmE1MWEyMTU0ODE2YTk4MzE5ZTEyZjc2ZjYxYjU5MmI4ZWYxNGFkYzYzZTdkZGJmMemFNSU=: 00:18:25.866 18:37:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:18:25.866 18:37:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:25.866 18:37:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:25.866 18:37:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:25.866 18:37:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:25.866 18:37:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:25.866 18:37:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:25.866 18:37:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.866 18:37:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:25.866 18:37:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.866 18:37:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:25.866 18:37:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:25.866 18:37:48 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:18:25.866 18:37:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:25.866 18:37:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:25.866 18:37:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:25.866 18:37:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:25.866 18:37:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:25.866 18:37:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:25.866 18:37:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:25.866 18:37:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:25.866 18:37:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:25.866 18:37:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.866 18:37:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:26.435 nvme0n1 00:18:26.436 18:37:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.436 18:37:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:26.436 18:37:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.436 18:37:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:26.436 18:37:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:26.436 18:37:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.436 18:37:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.436 18:37:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:26.436 18:37:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.436 18:37:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:26.436 18:37:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.436 18:37:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:26.436 18:37:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:18:26.436 18:37:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:26.436 18:37:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:26.436 18:37:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:26.436 18:37:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:26.436 18:37:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2E5ZTI2MmViNTFmNjdkNmE0NGQ5YTc3MGM1NmQ0NjQyMjI5NTgzMzI2MDNlOTRiVuBtIg==: 00:18:26.436 18:37:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODk4NDhlMWEyNThhNDliOTZiMjgwNDI4ZDE3MjNmNzk3ZjAyN2VlMGUxMjFlNWZl693mWQ==: 00:18:26.436 18:37:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:26.436 18:37:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:26.436 18:37:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:Y2E5ZTI2MmViNTFmNjdkNmE0NGQ5YTc3MGM1NmQ0NjQyMjI5NTgzMzI2MDNlOTRiVuBtIg==: 00:18:26.436 18:37:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODk4NDhlMWEyNThhNDliOTZiMjgwNDI4ZDE3MjNmNzk3ZjAyN2VlMGUxMjFlNWZl693mWQ==: ]] 00:18:26.436 18:37:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODk4NDhlMWEyNThhNDliOTZiMjgwNDI4ZDE3MjNmNzk3ZjAyN2VlMGUxMjFlNWZl693mWQ==: 00:18:26.436 18:37:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:18:26.436 18:37:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:26.436 18:37:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:26.436 18:37:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:26.436 18:37:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:26.436 18:37:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:26.436 18:37:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:26.436 18:37:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.436 18:37:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:26.436 18:37:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.436 18:37:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:26.436 18:37:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:26.436 18:37:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:26.436 18:37:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:26.436 18:37:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:26.436 18:37:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:26.436 18:37:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:26.436 18:37:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:26.436 18:37:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:26.436 18:37:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:26.436 18:37:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:26.436 18:37:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:26.436 18:37:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.436 18:37:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.034 nvme0n1 00:18:27.034 18:37:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.034 18:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:27.034 18:37:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.034 18:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:27.034 18:37:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.034 18:37:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.034 18:37:49 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.034 18:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:27.034 18:37:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.034 18:37:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.034 18:37:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.034 18:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:27.034 18:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:18:27.034 18:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:27.034 18:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:27.034 18:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:27.034 18:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:27.034 18:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjY0Y2FlYTdmYzRmNzcyY2ZiNmU1NzM1NjQ4MjlhOGXh+K23: 00:18:27.034 18:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2E3M2E0OTI3ZTc0MGM1MmVhMzI3NGYzOWVmZmUzMjTABfjG: 00:18:27.034 18:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:27.034 18:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:27.034 18:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjY0Y2FlYTdmYzRmNzcyY2ZiNmU1NzM1NjQ4MjlhOGXh+K23: 00:18:27.034 18:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2E3M2E0OTI3ZTc0MGM1MmVhMzI3NGYzOWVmZmUzMjTABfjG: ]] 00:18:27.034 18:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2E3M2E0OTI3ZTc0MGM1MmVhMzI3NGYzOWVmZmUzMjTABfjG: 00:18:27.034 18:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:18:27.034 18:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:27.034 18:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:27.034 18:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:27.034 18:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:27.034 18:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:27.034 18:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:27.034 18:37:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.034 18:37:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.034 18:37:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.034 18:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:27.034 18:37:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:27.034 18:37:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:27.034 18:37:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:27.034 18:37:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:27.034 18:37:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:27.034 18:37:49 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:27.034 18:37:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:27.034 18:37:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:27.034 18:37:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:27.034 18:37:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:27.034 18:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:27.034 18:37:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.034 18:37:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.601 nvme0n1 00:18:27.601 18:37:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.601 18:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:27.601 18:37:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.601 18:37:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.601 18:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:27.601 18:37:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.601 18:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.601 18:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:27.601 18:37:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.601 18:37:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.601 18:37:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.601 18:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:27.601 18:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:18:27.601 18:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:27.601 18:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:27.601 18:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:27.601 18:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:27.601 18:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjYyYTBkYjQxMGJhM2ZkYjdkMzhkNmY4YzE2NDgxOWQ5NWQ5ZWQ2Y2RkZWI1NjQ5aUautg==: 00:18:27.601 18:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGQ1ZDY0NjM4MmJkMDQyMjJkNzg3YTMwNWM2ODM0OWZecr0a: 00:18:27.601 18:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:27.601 18:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:27.601 18:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjYyYTBkYjQxMGJhM2ZkYjdkMzhkNmY4YzE2NDgxOWQ5NWQ5ZWQ2Y2RkZWI1NjQ5aUautg==: 00:18:27.601 18:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGQ1ZDY0NjM4MmJkMDQyMjJkNzg3YTMwNWM2ODM0OWZecr0a: ]] 00:18:27.601 18:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGQ1ZDY0NjM4MmJkMDQyMjJkNzg3YTMwNWM2ODM0OWZecr0a: 00:18:27.601 18:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:18:27.601 18:37:49 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:27.601 18:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:27.601 18:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:27.601 18:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:27.601 18:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:27.601 18:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:27.601 18:37:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.601 18:37:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.601 18:37:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.601 18:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:27.601 18:37:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:27.601 18:37:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:27.601 18:37:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:27.601 18:37:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:27.601 18:37:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:27.601 18:37:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:27.601 18:37:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:27.601 18:37:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:27.601 18:37:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:27.601 18:37:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:27.601 18:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:27.601 18:37:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.601 18:37:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:28.168 nvme0n1 00:18:28.168 18:37:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.168 18:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:28.168 18:37:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.168 18:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:28.168 18:37:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:28.168 18:37:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.168 18:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.168 18:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:28.168 18:37:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.168 18:37:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:28.168 18:37:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.168 18:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:18:28.168 18:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:18:28.168 18:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:28.168 18:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:28.168 18:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:28.168 18:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:28.168 18:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWYxZWUzZGY2Zjg0OWVlM2RkNWFhNDU5NGY4ZWNkNGVkZWY1YTUyYmRhNjFhYWVjNzQ0ZTU0NjAyNDA4YmYwMi7Oo3U=: 00:18:28.168 18:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:28.168 18:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:28.168 18:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:28.168 18:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWYxZWUzZGY2Zjg0OWVlM2RkNWFhNDU5NGY4ZWNkNGVkZWY1YTUyYmRhNjFhYWVjNzQ0ZTU0NjAyNDA4YmYwMi7Oo3U=: 00:18:28.168 18:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:28.168 18:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:18:28.168 18:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:28.168 18:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:28.168 18:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:28.168 18:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:28.168 18:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:28.168 18:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:28.168 18:37:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.168 18:37:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:28.168 18:37:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.168 18:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:28.168 18:37:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:28.168 18:37:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:28.168 18:37:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:28.168 18:37:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:28.168 18:37:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:28.168 18:37:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:28.168 18:37:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:28.168 18:37:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:28.168 18:37:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:28.168 18:37:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:28.168 18:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:28.168 18:37:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:18:28.168 18:37:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:28.735 nvme0n1 00:18:28.735 18:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.735 18:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:28.735 18:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:28.735 18:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.735 18:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:28.735 18:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.735 18:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.735 18:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:28.735 18:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.735 18:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:28.735 18:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.735 18:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:18:28.735 18:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:28.735 18:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:28.735 18:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:28.735 18:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:28.735 18:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2E5ZTI2MmViNTFmNjdkNmE0NGQ5YTc3MGM1NmQ0NjQyMjI5NTgzMzI2MDNlOTRiVuBtIg==: 00:18:28.735 18:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODk4NDhlMWEyNThhNDliOTZiMjgwNDI4ZDE3MjNmNzk3ZjAyN2VlMGUxMjFlNWZl693mWQ==: 00:18:28.735 18:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:28.735 18:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:28.735 18:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2E5ZTI2MmViNTFmNjdkNmE0NGQ5YTc3MGM1NmQ0NjQyMjI5NTgzMzI2MDNlOTRiVuBtIg==: 00:18:28.735 18:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODk4NDhlMWEyNThhNDliOTZiMjgwNDI4ZDE3MjNmNzk3ZjAyN2VlMGUxMjFlNWZl693mWQ==: ]] 00:18:28.735 18:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODk4NDhlMWEyNThhNDliOTZiMjgwNDI4ZDE3MjNmNzk3ZjAyN2VlMGUxMjFlNWZl693mWQ==: 00:18:28.735 18:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:28.735 18:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.735 18:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:28.735 18:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.735 18:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:18:28.735 18:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:28.735 18:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:28.735 18:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:28.735 18:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:28.735 
18:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:28.735 18:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:28.735 18:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:28.735 18:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:28.735 18:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:28.735 18:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:28.735 18:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:18:28.735 18:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:18:28.735 18:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:18:28.735 18:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:18:28.735 18:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:28.735 18:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:18:28.735 18:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:28.735 18:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:18:28.735 18:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.735 18:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:28.735 2024/07/15 18:37:51 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:18:28.735 request: 00:18:28.736 { 00:18:28.736 "method": "bdev_nvme_attach_controller", 00:18:28.736 "params": { 00:18:28.736 "name": "nvme0", 00:18:28.736 "trtype": "tcp", 00:18:28.736 "traddr": "10.0.0.1", 00:18:28.736 "adrfam": "ipv4", 00:18:28.736 "trsvcid": "4420", 00:18:28.736 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:18:28.736 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:18:28.736 "prchk_reftag": false, 00:18:28.736 "prchk_guard": false, 00:18:28.736 "hdgst": false, 00:18:28.736 "ddgst": false 00:18:28.736 } 00:18:28.736 } 00:18:28.736 Got JSON-RPC error response 00:18:28.736 GoRPCClient: error on JSON-RPC call 00:18:28.736 18:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:28.736 18:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:18:28.736 18:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:28.736 18:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:28.736 18:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:28.736 18:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- 
# rpc_cmd bdev_nvme_get_controllers 00:18:28.736 18:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.736 18:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:18:28.736 18:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:28.736 18:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.736 18:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:18:28.736 18:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:18:28.736 18:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:28.736 18:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:28.736 18:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:28.736 18:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:28.736 18:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:28.736 18:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:28.736 18:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:28.736 18:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:28.736 18:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:28.736 18:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:28.736 18:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:18:28.736 18:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:18:28.736 18:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:18:28.736 18:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:18:28.736 18:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:28.736 18:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:18:28.736 18:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:28.736 18:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:18:28.736 18:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.736 18:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:28.736 2024/07/15 18:37:51 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:18:28.736 request: 00:18:28.736 { 00:18:28.736 "method": "bdev_nvme_attach_controller", 00:18:28.736 "params": { 00:18:28.736 "name": 
"nvme0", 00:18:28.736 "trtype": "tcp", 00:18:28.736 "traddr": "10.0.0.1", 00:18:28.736 "adrfam": "ipv4", 00:18:28.736 "trsvcid": "4420", 00:18:28.736 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:18:28.736 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:18:28.736 "prchk_reftag": false, 00:18:28.736 "prchk_guard": false, 00:18:28.736 "hdgst": false, 00:18:28.736 "ddgst": false, 00:18:28.736 "dhchap_key": "key2" 00:18:28.736 } 00:18:28.736 } 00:18:28.736 Got JSON-RPC error response 00:18:28.736 GoRPCClient: error on JSON-RPC call 00:18:28.736 18:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:28.736 18:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:18:28.736 18:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:28.736 18:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:28.736 18:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:28.736 18:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:18:28.736 18:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.736 18:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:28.736 18:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:18:28.736 18:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.736 18:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:18:28.736 18:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:18:28.736 18:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:28.736 18:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:28.736 18:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:28.736 18:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:28.736 18:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:28.736 18:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:28.736 18:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:28.736 18:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:28.736 18:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:28.736 18:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:28.736 18:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:28.736 18:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:18:28.736 18:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:28.736 18:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:18:28.736 18:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:28.736 18:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 
00:18:28.736 18:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:28.736 18:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:28.736 18:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.736 18:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:28.995 2024/07/15 18:37:51 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:18:28.995 request: 00:18:28.995 { 00:18:28.995 "method": "bdev_nvme_attach_controller", 00:18:28.995 "params": { 00:18:28.995 "name": "nvme0", 00:18:28.995 "trtype": "tcp", 00:18:28.995 "traddr": "10.0.0.1", 00:18:28.995 "adrfam": "ipv4", 00:18:28.995 "trsvcid": "4420", 00:18:28.995 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:18:28.995 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:18:28.995 "prchk_reftag": false, 00:18:28.995 "prchk_guard": false, 00:18:28.995 "hdgst": false, 00:18:28.995 "ddgst": false, 00:18:28.995 "dhchap_key": "key1", 00:18:28.995 "dhchap_ctrlr_key": "ckey2" 00:18:28.995 } 00:18:28.995 } 00:18:28.995 Got JSON-RPC error response 00:18:28.995 GoRPCClient: error on JSON-RPC call 00:18:28.995 18:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:28.995 18:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:18:28.995 18:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:28.995 18:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:28.995 18:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:28.995 18:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:18:28.995 18:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:18:28.995 18:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:18:28.995 18:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:28.995 18:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:18:28.995 18:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:28.995 18:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:18:28.995 18:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:28.995 18:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:28.995 rmmod nvme_tcp 00:18:28.995 rmmod nvme_fabrics 00:18:28.995 18:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:28.995 18:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:18:28.995 18:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:18:28.995 18:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 91016 ']' 00:18:28.995 18:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 91016 00:18:28.995 18:37:51 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 91016 ']' 00:18:28.995 18:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 91016 00:18:28.995 18:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:18:28.995 18:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:28.995 18:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91016 00:18:28.996 18:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:28.996 18:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:28.996 killing process with pid 91016 00:18:28.996 18:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 91016' 00:18:28.996 18:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 91016 00:18:28.996 18:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 91016 00:18:29.255 18:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:29.255 18:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:29.255 18:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:29.255 18:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:29.255 18:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:29.255 18:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:29.255 18:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:29.255 18:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:29.255 18:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:29.255 18:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:18:29.255 18:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:18:29.255 18:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:18:29.255 18:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:18:29.255 18:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:18:29.255 18:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:29.255 18:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:18:29.255 18:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:18:29.255 18:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:29.255 18:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:18:29.255 18:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:18:29.255 18:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:30.213 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:30.213 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:18:30.213 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:18:30.470 18:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.eYH /tmp/spdk.key-null.tHJ /tmp/spdk.key-sha256.Z0P /tmp/spdk.key-sha384.7Ml /tmp/spdk.key-sha512.kUt /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:18:30.470 18:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:31.036 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:31.036 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:31.036 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:31.036 00:18:31.036 real 0m32.648s 00:18:31.036 user 0m29.955s 00:18:31.036 sys 0m4.950s 00:18:31.036 18:37:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:31.036 18:37:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:31.036 ************************************ 00:18:31.036 END TEST nvmf_auth_host 00:18:31.036 ************************************ 00:18:31.036 18:37:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:31.036 18:37:53 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:18:31.036 18:37:53 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:18:31.036 18:37:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:31.036 18:37:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:31.036 18:37:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:31.036 ************************************ 00:18:31.036 START TEST nvmf_digest 00:18:31.036 ************************************ 00:18:31.036 18:37:53 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:18:31.036 * Looking for test storage... 
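The teardown that just ran removes the DH-CHAP host entry and then unwinds the kernel nvmet target in roughly the reverse order it was built. Condensed into one place below (a sketch based on the cleanup commands in the log; cfs, subnqn and hostnqn are shorthands introduced here, and the redirect target of the bare 'echo 0' is not visible in the output, so the namespace enable attribute is an assumption):

cfs=/sys/kernel/config/nvmet
subnqn=nqn.2024-02.io.spdk:cnode0
hostnqn=nqn.2024-02.io.spdk:host0

rm "$cfs/subsystems/$subnqn/allowed_hosts/$hostnqn"      # drop the allowed-hosts link
rmdir "$cfs/hosts/$hostnqn"                              # and the host's DH-CHAP entry

echo 0 > "$cfs/subsystems/$subnqn/namespaces/1/enable"   # assumed target of the bare 'echo 0'
rm -f "$cfs/ports/1/subsystems/$subnqn"                  # unlink the subsystem from the TCP port
rmdir "$cfs/subsystems/$subnqn/namespaces/1"
rmdir "$cfs/ports/1"
rmdir "$cfs/subsystems/$subnqn"
modprobe -r nvmet_tcp nvmet                              # finally unload the target modules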
00:18:31.036 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:31.036 18:37:53 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:31.036 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:18:31.036 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:31.036 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:31.036 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:31.036 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:31.036 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:31.036 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:31.036 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:31.036 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:31.036 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:31.036 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:31.295 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:18:31.295 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=ee8aff67-4252-4979-91cf-1a72f40d57b6 00:18:31.295 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:31.295 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:31.295 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:31.295 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:31.295 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:31.295 18:37:53 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:31.295 18:37:53 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:31.295 18:37:53 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:31.295 18:37:53 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.295 18:37:53 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.295 18:37:53 nvmf_tcp.nvmf_digest 
-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.295 18:37:53 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:18:31.295 18:37:53 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.295 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:18:31.295 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:31.295 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:31.295 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:31.295 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:31.295 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:31.295 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:31.295 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:31.295 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:31.295 18:37:53 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:18:31.295 18:37:53 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:18:31.295 18:37:53 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:18:31.295 18:37:53 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:18:31.295 18:37:53 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:18:31.295 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:31.295 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:31.295 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:31.295 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:31.295 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:31.295 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:31.295 18:37:53 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:31.295 18:37:53 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:31.295 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:31.295 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:31.295 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@423 -- # [[ virt == phy ]] 
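digest.sh starts by sourcing nvmf/common.sh, which generates a fresh host identity with nvme gen-hostnqn. The snippet below is only an illustration of how those pieces are meant to compose into a kernel-initiator connect command; the uuid-extraction line is one plausible derivation rather than the exact one in common.sh, and the address 10.0.0.2, port 4420 and nqn.2016-06.io.spdk:testnqn are simply the NVMF_FIRST_TARGET_IP, NVMF_PORT and NVME_SUBNQN values set above.

NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:ee8aff67-...
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # bare uuid portion (assumed derivation)
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
NVME_CONNECT='nvme connect'

# Illustrative composition of the pieces above:
$NVME_CONNECT -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:testnqn "${NVME_HOST[@]}"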
00:18:31.295 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:31.295 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:31.295 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:31.295 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:31.295 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:31.295 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:31.295 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:31.295 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:31.295 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:31.295 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:31.295 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:31.295 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:31.295 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:31.295 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:31.295 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:31.295 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:31.295 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:31.295 Cannot find device "nvmf_tgt_br" 00:18:31.295 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # true 00:18:31.295 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:31.295 Cannot find device "nvmf_tgt_br2" 00:18:31.295 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # true 00:18:31.295 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:31.295 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:31.295 Cannot find device "nvmf_tgt_br" 00:18:31.295 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # true 00:18:31.295 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:31.295 Cannot find device "nvmf_tgt_br2" 00:18:31.295 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # true 00:18:31.295 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:31.295 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:31.295 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:31.295 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:31.295 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # true 00:18:31.295 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:31.295 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:31.295 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # true 00:18:31.295 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:31.295 18:37:53 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:31.295 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:31.295 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:31.295 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:31.295 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:31.553 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:31.553 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:31.553 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:31.553 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:31.553 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:31.553 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:31.553 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:31.553 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:31.553 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:31.553 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:31.553 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:31.553 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:31.553 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:31.553 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:31.553 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:31.553 18:37:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:31.554 18:37:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:31.554 18:37:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:31.554 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:31.554 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.098 ms 00:18:31.554 00:18:31.554 --- 10.0.0.2 ping statistics --- 00:18:31.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:31.554 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:18:31.554 18:37:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:31.554 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:31.554 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:18:31.554 00:18:31.554 --- 10.0.0.3 ping statistics --- 00:18:31.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:31.554 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:18:31.554 18:37:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:31.554 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:31.554 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:18:31.554 00:18:31.554 --- 10.0.0.1 ping statistics --- 00:18:31.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:31.554 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:18:31.554 18:37:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:31.554 18:37:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@433 -- # return 0 00:18:31.554 18:37:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:31.554 18:37:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:31.554 18:37:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:31.554 18:37:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:31.554 18:37:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:31.554 18:37:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:31.554 18:37:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:31.554 18:37:54 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:31.554 18:37:54 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:18:31.554 18:37:54 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:18:31.554 18:37:54 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:18:31.554 18:37:54 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:31.554 18:37:54 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:18:31.554 ************************************ 00:18:31.554 START TEST nvmf_digest_clean 00:18:31.554 ************************************ 00:18:31.554 18:37:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:18:31.554 18:37:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:18:31.554 18:37:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:18:31.554 18:37:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:18:31.554 18:37:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:18:31.554 18:37:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:18:31.554 18:37:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:31.554 18:37:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:31.554 18:37:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:31.554 18:37:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=92586 00:18:31.554 18:37:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 92586 00:18:31.554 18:37:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 92586 ']' 00:18:31.554 18:37:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:31.554 18:37:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:18:31.554 18:37:54 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:31.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:31.554 18:37:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:31.554 18:37:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:31.554 18:37:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:31.554 [2024-07-15 18:37:54.133767] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:18:31.554 [2024-07-15 18:37:54.133831] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:31.811 [2024-07-15 18:37:54.277138] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:31.811 [2024-07-15 18:37:54.369287] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:31.811 [2024-07-15 18:37:54.369335] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:31.811 [2024-07-15 18:37:54.369345] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:31.811 [2024-07-15 18:37:54.369353] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:31.811 [2024-07-15 18:37:54.369360] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
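nvmfappstart runs the target inside the namespace with --wait-for-rpc and then blocks until the RPC socket answers. A simplified sketch of that launch-and-wait pattern is below; the polling loop stands in for the script's waitforlisten helper and is an assumption, not the helper's actual implementation.

    # Simplified nvmfappstart: launch nvmf_tgt in the target namespace, then wait for its RPC socket.
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!
    # Poll until /var/tmp/spdk.sock accepts an RPC (waitforlisten, condensed).
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2
    done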
00:18:31.811 [2024-07-15 18:37:54.369385] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:32.376 18:37:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:32.376 18:37:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:18:32.376 18:37:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:32.376 18:37:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:32.376 18:37:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:32.633 18:37:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:32.633 18:37:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:18:32.633 18:37:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:18:32.633 18:37:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:18:32.633 18:37:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.633 18:37:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:32.633 null0 00:18:32.633 [2024-07-15 18:37:55.140629] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:32.633 [2024-07-15 18:37:55.164681] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:32.633 18:37:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.633 18:37:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:18:32.633 18:37:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:18:32.633 18:37:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:18:32.633 18:37:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:18:32.633 18:37:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:18:32.633 18:37:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:18:32.633 18:37:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:18:32.633 18:37:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=92635 00:18:32.633 18:37:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 92635 /var/tmp/bperf.sock 00:18:32.633 18:37:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:18:32.633 18:37:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 92635 ']' 00:18:32.633 18:37:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:32.633 18:37:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:32.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
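The common_target_config step above goes through a single rpc_cmd block, and only its NOTICE output (the null0 bdev, the TCP transport init, the listener on 10.0.0.2:4420) shows up in the trace. Roughly the following RPC sequence produces that target state; the RPC names match SPDK's, but the null-bdev size and block size below are placeholders rather than values taken from the trace.

    # Illustrative target-side RPC sequence behind common_target_config (sizes are placeholders).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc framework_start_init                                    # target was started with --wait-for-rpc
    $rpc bdev_null_create null0 100 4096                         # backing namespace: null bdev
    $rpc nvmf_create_transport -t tcp                            # "*** TCP Transport Init ***"
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420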
00:18:32.633 18:37:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:32.633 18:37:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:32.633 18:37:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:32.633 [2024-07-15 18:37:55.222800] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:18:32.633 [2024-07-15 18:37:55.222886] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92635 ] 00:18:32.891 [2024-07-15 18:37:55.364500] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:32.891 [2024-07-15 18:37:55.449431] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:33.822 18:37:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:33.822 18:37:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:18:33.822 18:37:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:18:33.822 18:37:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:18:33.822 18:37:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:18:33.822 18:37:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:33.822 18:37:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:34.080 nvme0n1 00:18:34.080 18:37:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:18:34.080 18:37:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:34.338 Running I/O for 2 seconds... 
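Condensing the run_bperf control path the trace just walked through: bdevperf is parked with -z and --wait-for-rpc, configured over its own socket, attached to the target with data digest enabled, and then told to run. All commands below appear in the trace; only the shell glue around them is added.

    # One run_bperf iteration, condensed (error handling omitted).
    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bperf=/var/tmp/bperf.sock

    $bdevperf -m 2 -r $bperf -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    bperfpid=$!
    until [ -S "$bperf" ]; do sleep 0.1; done            # stand-in for waitforlisten on bperf.sock
    $rpc -s $bperf framework_start_init                  # bdevperf was started with --wait-for-rpc
    $rpc -s $bperf bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0           # --ddgst enables NVMe/TCP data digest (crc32c)
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $bperf perform_tests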
00:18:36.238 00:18:36.238 Latency(us) 00:18:36.238 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:36.238 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:18:36.238 nvme0n1 : 2.00 24565.94 95.96 0.00 0.00 5205.23 2460.89 17476.27 00:18:36.238 =================================================================================================================== 00:18:36.238 Total : 24565.94 95.96 0.00 0.00 5205.23 2460.89 17476.27 00:18:36.238 0 00:18:36.238 18:37:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:18:36.238 18:37:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:18:36.238 18:37:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:18:36.238 | select(.opcode=="crc32c") 00:18:36.238 | "\(.module_name) \(.executed)"' 00:18:36.238 18:37:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:18:36.238 18:37:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:18:36.512 18:37:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:18:36.512 18:37:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:18:36.512 18:37:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:18:36.512 18:37:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:36.512 18:37:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 92635 00:18:36.512 18:37:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 92635 ']' 00:18:36.512 18:37:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 92635 00:18:36.512 18:37:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:18:36.512 18:37:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:36.512 18:37:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 92635 00:18:36.512 18:37:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:36.512 18:37:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:36.512 18:37:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 92635' 00:18:36.512 killing process with pid 92635 00:18:36.512 18:37:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 92635 00:18:36.512 Received shutdown signal, test time was about 2.000000 seconds 00:18:36.512 00:18:36.512 Latency(us) 00:18:36.512 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:36.512 =================================================================================================================== 00:18:36.512 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:36.512 18:37:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 92635 00:18:36.770 18:37:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:18:36.770 18:37:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:18:36.770 18:37:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:18:36.770 18:37:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:18:36.770 18:37:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:18:36.770 18:37:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:18:36.770 18:37:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:18:36.770 18:37:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=92727 00:18:36.770 18:37:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 92727 /var/tmp/bperf.sock 00:18:36.770 18:37:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:18:36.770 18:37:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 92727 ']' 00:18:36.770 18:37:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:36.770 18:37:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:36.770 18:37:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:36.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:36.770 18:37:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:36.770 18:37:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:36.770 [2024-07-15 18:37:59.203755] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:18:36.770 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:36.770 Zero copy mechanism will not be used. 
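After each two-second run, run_bperf pulls accel statistics over the bperf socket and checks that crc32c was actually executed, and by the expected module (software here, since DSA off-load is disabled for these cases). A condensed version of that check, reusing the jq filter visible in the trace:

    # get_accel_stats + module check, condensed from the trace.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"' \
      | { read -r acc_module acc_executed
          (( acc_executed > 0 )) && [[ $acc_module == software ]] \
              && echo "crc32c ran in software: $acc_executed operations"; }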
00:18:36.770 [2024-07-15 18:37:59.203835] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92727 ] 00:18:36.770 [2024-07-15 18:37:59.343618] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:37.029 [2024-07-15 18:37:59.424874] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:37.597 18:38:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:37.597 18:38:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:18:37.597 18:38:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:18:37.597 18:38:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:18:37.597 18:38:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:18:37.855 18:38:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:37.855 18:38:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:38.113 nvme0n1 00:18:38.113 18:38:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:18:38.113 18:38:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:38.113 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:38.113 Zero copy mechanism will not be used. 00:18:38.113 Running I/O for 2 seconds... 
00:18:40.644 00:18:40.644 Latency(us) 00:18:40.644 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:40.644 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:18:40.644 nvme0n1 : 2.00 9965.72 1245.72 0.00 0.00 1602.79 500.07 2500.37 00:18:40.644 =================================================================================================================== 00:18:40.644 Total : 9965.72 1245.72 0.00 0.00 1602.79 500.07 2500.37 00:18:40.644 0 00:18:40.644 18:38:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:18:40.644 18:38:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:18:40.644 18:38:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:18:40.644 18:38:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:18:40.644 | select(.opcode=="crc32c") 00:18:40.644 | "\(.module_name) \(.executed)"' 00:18:40.644 18:38:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:18:40.645 18:38:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:18:40.645 18:38:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:18:40.645 18:38:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:18:40.645 18:38:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:40.645 18:38:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 92727 00:18:40.645 18:38:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 92727 ']' 00:18:40.645 18:38:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 92727 00:18:40.645 18:38:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:18:40.645 18:38:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:40.645 18:38:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 92727 00:18:40.645 18:38:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:40.645 18:38:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:40.645 killing process with pid 92727 00:18:40.645 18:38:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 92727' 00:18:40.645 18:38:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 92727 00:18:40.645 Received shutdown signal, test time was about 2.000000 seconds 00:18:40.645 00:18:40.645 Latency(us) 00:18:40.645 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:40.645 =================================================================================================================== 00:18:40.645 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:40.645 18:38:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 92727 00:18:40.645 18:38:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:18:40.645 18:38:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:18:40.645 18:38:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:18:40.645 18:38:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:18:40.645 18:38:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:18:40.645 18:38:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:18:40.645 18:38:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:18:40.645 18:38:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=92813 00:18:40.645 18:38:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:18:40.645 18:38:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 92813 /var/tmp/bperf.sock 00:18:40.645 18:38:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 92813 ']' 00:18:40.645 18:38:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:40.645 18:38:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:40.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:40.645 18:38:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:40.645 18:38:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:40.645 18:38:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:40.645 [2024-07-15 18:38:03.180495] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 
00:18:40.645 [2024-07-15 18:38:03.180583] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92813 ] 00:18:40.902 [2024-07-15 18:38:03.322519] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:40.902 [2024-07-15 18:38:03.415407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:41.465 18:38:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:41.465 18:38:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:18:41.465 18:38:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:18:41.465 18:38:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:18:41.465 18:38:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:18:41.723 18:38:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:41.723 18:38:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:41.980 nvme0n1 00:18:42.237 18:38:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:18:42.237 18:38:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:42.237 Running I/O for 2 seconds... 
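The four nvmf_digest_clean iterations in this trace differ only in workload, I/O size and queue depth; the script simply calls run_bperf four times. Written as an equivalent loop for clarity (the script itself does not loop):

    # The four clean-digest cases: (rw, io size, queue depth), with DSA scanning disabled.
    for params in "randread 4096 128" "randread 131072 16" "randwrite 4096 128" "randwrite 131072 16"; do
        set -- $params
        run_bperf "$1" "$2" "$3" false    # false => scan_dsa off, crc32c stays in the software module
    done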
00:18:44.135 00:18:44.135 Latency(us) 00:18:44.135 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:44.135 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:44.135 nvme0n1 : 2.00 29179.03 113.98 0.00 0.00 4381.19 1829.22 9843.56 00:18:44.135 =================================================================================================================== 00:18:44.135 Total : 29179.03 113.98 0.00 0.00 4381.19 1829.22 9843.56 00:18:44.135 0 00:18:44.135 18:38:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:18:44.135 18:38:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:18:44.135 18:38:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:18:44.135 18:38:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:18:44.135 | select(.opcode=="crc32c") 00:18:44.135 | "\(.module_name) \(.executed)"' 00:18:44.135 18:38:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:18:44.393 18:38:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:18:44.393 18:38:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:18:44.393 18:38:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:18:44.393 18:38:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:44.393 18:38:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 92813 00:18:44.393 18:38:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 92813 ']' 00:18:44.393 18:38:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 92813 00:18:44.393 18:38:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:18:44.393 18:38:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:44.393 18:38:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 92813 00:18:44.651 18:38:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:44.651 18:38:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:44.651 18:38:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 92813' 00:18:44.651 killing process with pid 92813 00:18:44.651 18:38:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 92813 00:18:44.651 Received shutdown signal, test time was about 2.000000 seconds 00:18:44.651 00:18:44.651 Latency(us) 00:18:44.651 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:44.651 =================================================================================================================== 00:18:44.651 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:44.651 18:38:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 92813 00:18:44.651 18:38:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:18:44.651 18:38:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:18:44.651 18:38:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:18:44.651 18:38:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:18:44.651 18:38:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:18:44.651 18:38:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:18:44.651 18:38:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:18:44.651 18:38:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=92903 00:18:44.651 18:38:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 92903 /var/tmp/bperf.sock 00:18:44.651 18:38:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:18:44.651 18:38:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 92903 ']' 00:18:44.651 18:38:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:44.651 18:38:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:44.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:44.651 18:38:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:44.651 18:38:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:44.651 18:38:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:44.651 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:44.651 Zero copy mechanism will not be used. 00:18:44.651 [2024-07-15 18:38:07.252343] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 
00:18:44.651 [2024-07-15 18:38:07.252420] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92903 ] 00:18:44.908 [2024-07-15 18:38:07.388816] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:44.908 [2024-07-15 18:38:07.480795] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:45.843 18:38:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:45.843 18:38:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:18:45.843 18:38:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:18:45.843 18:38:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:18:45.843 18:38:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:18:45.844 18:38:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:45.844 18:38:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:46.102 nvme0n1 00:18:46.102 18:38:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:18:46.102 18:38:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:46.361 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:46.361 Zero copy mechanism will not be used. 00:18:46.361 Running I/O for 2 seconds... 
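Once this last run finishes, the trace below tears things down in order: first the bdevperf instance, then the nvmf target, with nvmftestfini left to the EXIT trap. A simplified sketch of that teardown, omitting killprocess's ps/uname sanity checks; the pid variables are the ones captured at launch (92903 and 92586 in this trace).

    # Simplified end-of-test teardown.
    kill "$bperfpid" && wait "$bperfpid" 2>/dev/null     # stop bdevperf (reactor_1)
    kill "$nvmfpid"  && wait "$nvmfpid"  2>/dev/null     # stop the nvmf target (reactor_0)
    # nvmftestfini then removes the namespace, veths and bridge via the EXIT trap.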
00:18:48.266 00:18:48.266 Latency(us) 00:18:48.266 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:48.266 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:18:48.266 nvme0n1 : 2.00 9258.96 1157.37 0.00 0.00 1724.80 1302.82 5211.30 00:18:48.266 =================================================================================================================== 00:18:48.266 Total : 9258.96 1157.37 0.00 0.00 1724.80 1302.82 5211.30 00:18:48.266 0 00:18:48.266 18:38:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:18:48.266 18:38:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:18:48.266 18:38:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:18:48.266 18:38:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:18:48.266 18:38:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:18:48.266 | select(.opcode=="crc32c") 00:18:48.266 | "\(.module_name) \(.executed)"' 00:18:48.529 18:38:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:18:48.529 18:38:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:18:48.529 18:38:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:18:48.529 18:38:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:48.529 18:38:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 92903 00:18:48.529 18:38:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 92903 ']' 00:18:48.529 18:38:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 92903 00:18:48.529 18:38:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:18:48.529 18:38:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:48.529 18:38:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 92903 00:18:48.529 18:38:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:48.529 18:38:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:48.529 killing process with pid 92903 00:18:48.529 18:38:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 92903' 00:18:48.529 18:38:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 92903 00:18:48.529 Received shutdown signal, test time was about 2.000000 seconds 00:18:48.529 00:18:48.529 Latency(us) 00:18:48.529 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:48.529 =================================================================================================================== 00:18:48.529 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:48.529 18:38:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 92903 00:18:48.785 18:38:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 92586 00:18:48.785 18:38:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@948 -- # '[' -z 92586 ']' 00:18:48.785 18:38:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 92586 00:18:48.785 18:38:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:18:48.785 18:38:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:48.785 18:38:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 92586 00:18:48.785 18:38:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:48.785 18:38:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:48.785 killing process with pid 92586 00:18:48.785 18:38:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 92586' 00:18:48.785 18:38:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 92586 00:18:48.785 18:38:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 92586 00:18:49.043 00:18:49.043 real 0m17.331s 00:18:49.043 user 0m31.788s 00:18:49.043 sys 0m4.865s 00:18:49.043 18:38:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:49.043 18:38:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:49.043 ************************************ 00:18:49.043 END TEST nvmf_digest_clean 00:18:49.043 ************************************ 00:18:49.043 18:38:11 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:18:49.043 18:38:11 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:18:49.043 18:38:11 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:18:49.043 18:38:11 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:49.043 18:38:11 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:18:49.043 ************************************ 00:18:49.043 START TEST nvmf_digest_error 00:18:49.043 ************************************ 00:18:49.043 18:38:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:18:49.043 18:38:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:18:49.043 18:38:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:49.043 18:38:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:49.043 18:38:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:49.043 18:38:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=93015 00:18:49.043 18:38:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:18:49.043 18:38:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 93015 00:18:49.043 18:38:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 93015 ']' 00:18:49.043 18:38:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:49.043 18:38:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:18:49.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:49.043 18:38:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:49.043 18:38:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:49.043 18:38:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:49.043 [2024-07-15 18:38:11.543836] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:18:49.043 [2024-07-15 18:38:11.543908] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:49.301 [2024-07-15 18:38:11.686879] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:49.301 [2024-07-15 18:38:11.777608] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:49.301 [2024-07-15 18:38:11.777654] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:49.301 [2024-07-15 18:38:11.777664] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:49.301 [2024-07-15 18:38:11.777672] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:49.301 [2024-07-15 18:38:11.777679] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:49.301 [2024-07-15 18:38:11.777710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:49.864 18:38:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:49.864 18:38:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:18:49.864 18:38:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:49.864 18:38:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:49.864 18:38:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:49.864 18:38:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:49.864 18:38:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:18:49.864 18:38:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.864 18:38:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:49.864 [2024-07-15 18:38:12.453051] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:18:49.864 18:38:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.864 18:38:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:18:49.864 18:38:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:18:49.864 18:38:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.864 18:38:12 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:50.121 null0 00:18:50.121 [2024-07-15 18:38:12.549377] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:50.121 [2024-07-15 18:38:12.573435] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:50.121 18:38:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.121 18:38:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:18:50.121 18:38:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:18:50.121 18:38:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:18:50.121 18:38:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:18:50.121 18:38:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:18:50.121 18:38:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=93059 00:18:50.121 18:38:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 93059 /var/tmp/bperf.sock 00:18:50.121 18:38:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:18:50.121 18:38:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 93059 ']' 00:18:50.121 18:38:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:50.121 18:38:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:50.121 18:38:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:50.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:50.121 18:38:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:50.121 18:38:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:50.121 [2024-07-15 18:38:12.629372] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 
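The nvmf_digest_error setup routes the target's crc32c work through the accel "error" module (the accel_assign_opc call above), so digests can be corrupted on demand: injection is disabled while the controller attaches, then switched to corrupt before perform_tests. The commands below are condensed from the trace, with comments noting which RPC socket each one targets; the "data digest error on tqpair" messages and TRANSIENT TRANSPORT ERROR completions that follow are the expected result on the initiator side.

    # Error-injection sequence for run_bperf_err, condensed from the trace.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc accel_assign_opc -o crc32c -m error                      # target (spdk.sock): crc32c -> error module
    $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    $rpc accel_error_inject_error -o crc32c -t disable            # no corruption while attaching
    $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    $rpc accel_error_inject_error -o crc32c -t corrupt -i 256     # start corrupting crc32c results (-i 256 as in the trace)
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests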
00:18:50.121 [2024-07-15 18:38:12.629446] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93059 ] 00:18:50.377 [2024-07-15 18:38:12.772243] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.378 [2024-07-15 18:38:12.865411] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:50.942 18:38:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:50.942 18:38:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:18:50.942 18:38:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:50.942 18:38:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:51.201 18:38:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:51.201 18:38:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.201 18:38:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:51.201 18:38:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.201 18:38:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:51.201 18:38:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:51.459 nvme0n1 00:18:51.460 18:38:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:18:51.460 18:38:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.460 18:38:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:51.460 18:38:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.460 18:38:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:51.460 18:38:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:51.720 Running I/O for 2 seconds... 
00:18:51.720 [2024-07-15 18:38:14.131072] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:51.720 [2024-07-15 18:38:14.131509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:10595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.720 [2024-07-15 18:38:14.131625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:51.720 [2024-07-15 18:38:14.142609] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:51.720 [2024-07-15 18:38:14.142736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:7913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.720 [2024-07-15 18:38:14.142796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:51.720 [2024-07-15 18:38:14.154472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:51.720 [2024-07-15 18:38:14.154606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:17167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.720 [2024-07-15 18:38:14.154674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:51.720 [2024-07-15 18:38:14.164539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:51.720 [2024-07-15 18:38:14.164657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:2039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.720 [2024-07-15 18:38:14.164720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:51.720 [2024-07-15 18:38:14.176908] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:51.720 [2024-07-15 18:38:14.177009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.720 [2024-07-15 18:38:14.177078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:51.720 [2024-07-15 18:38:14.185874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:51.720 [2024-07-15 18:38:14.185975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:20827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.720 [2024-07-15 18:38:14.186034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:51.720 [2024-07-15 18:38:14.196853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:51.720 [2024-07-15 18:38:14.196955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:13734 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.720 [2024-07-15 18:38:14.197022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:51.720 [2024-07-15 18:38:14.207817] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:51.720 [2024-07-15 18:38:14.207921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:15818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.720 [2024-07-15 18:38:14.207980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:51.720 [2024-07-15 18:38:14.219013] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:51.720 [2024-07-15 18:38:14.219100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:4500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.720 [2024-07-15 18:38:14.219171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:51.720 [2024-07-15 18:38:14.228638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:51.720 [2024-07-15 18:38:14.228734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:7126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.720 [2024-07-15 18:38:14.228792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:51.720 [2024-07-15 18:38:14.239732] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:51.720 [2024-07-15 18:38:14.239839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.720 [2024-07-15 18:38:14.239855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:51.720 [2024-07-15 18:38:14.250938] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:51.720 [2024-07-15 18:38:14.250971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.720 [2024-07-15 18:38:14.250983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:51.720 [2024-07-15 18:38:14.261720] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:51.720 [2024-07-15 18:38:14.261753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:15562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.720 [2024-07-15 18:38:14.261764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:51.720 [2024-07-15 18:38:14.271578] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:51.720 [2024-07-15 18:38:14.271611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:6200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.720 [2024-07-15 18:38:14.271622] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:51.720 [2024-07-15 18:38:14.282430] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:51.720 [2024-07-15 18:38:14.282462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:11077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.720 [2024-07-15 18:38:14.282474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:51.720 [2024-07-15 18:38:14.292230] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:51.720 [2024-07-15 18:38:14.292260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:5653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.720 [2024-07-15 18:38:14.292271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:51.720 [2024-07-15 18:38:14.304403] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:51.720 [2024-07-15 18:38:14.304433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:5995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.720 [2024-07-15 18:38:14.304444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:51.720 [2024-07-15 18:38:14.314002] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:51.720 [2024-07-15 18:38:14.314030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:13448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.720 [2024-07-15 18:38:14.314042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:51.720 [2024-07-15 18:38:14.324619] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:51.720 [2024-07-15 18:38:14.324644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:24875 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.720 [2024-07-15 18:38:14.324654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:51.992 [2024-07-15 18:38:14.335841] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:51.992 [2024-07-15 18:38:14.335870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:19493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.992 [2024-07-15 18:38:14.335881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:51.992 [2024-07-15 18:38:14.346427] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:51.992 [2024-07-15 18:38:14.346458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:21466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:51.992 [2024-07-15 18:38:14.346469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:51.992 [2024-07-15 18:38:14.355743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:51.992 [2024-07-15 18:38:14.355771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:19717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.992 [2024-07-15 18:38:14.355782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:51.992 [2024-07-15 18:38:14.367078] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:51.992 [2024-07-15 18:38:14.367109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:3375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.992 [2024-07-15 18:38:14.367120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:51.992 [2024-07-15 18:38:14.377457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:51.993 [2024-07-15 18:38:14.377489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:20786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.993 [2024-07-15 18:38:14.377500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:51.993 [2024-07-15 18:38:14.386042] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:51.993 [2024-07-15 18:38:14.386074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:14117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.993 [2024-07-15 18:38:14.386085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:51.993 [2024-07-15 18:38:14.398865] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:51.993 [2024-07-15 18:38:14.398898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:4633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.993 [2024-07-15 18:38:14.398909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:51.993 [2024-07-15 18:38:14.409604] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:51.993 [2024-07-15 18:38:14.409633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.993 [2024-07-15 18:38:14.409644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:51.993 [2024-07-15 18:38:14.419841] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:51.993 [2024-07-15 18:38:14.419870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 
lba:1277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.993 [2024-07-15 18:38:14.419880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:51.993 [2024-07-15 18:38:14.429347] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:51.993 [2024-07-15 18:38:14.429376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21260 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.993 [2024-07-15 18:38:14.429388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:51.993 [2024-07-15 18:38:14.439380] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:51.993 [2024-07-15 18:38:14.439410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:25184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.993 [2024-07-15 18:38:14.439420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:51.993 [2024-07-15 18:38:14.450438] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:51.993 [2024-07-15 18:38:14.450471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:10719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.993 [2024-07-15 18:38:14.450483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:51.993 [2024-07-15 18:38:14.461165] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:51.993 [2024-07-15 18:38:14.461195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:15483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.993 [2024-07-15 18:38:14.461206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:51.993 [2024-07-15 18:38:14.470963] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:51.993 [2024-07-15 18:38:14.470993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.993 [2024-07-15 18:38:14.471004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:51.993 [2024-07-15 18:38:14.481163] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:51.993 [2024-07-15 18:38:14.481194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:21670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.993 [2024-07-15 18:38:14.481205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:51.993 [2024-07-15 18:38:14.490992] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:51.993 [2024-07-15 18:38:14.491022] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:9963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.993 [2024-07-15 18:38:14.491033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:51.993 [2024-07-15 18:38:14.501765] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:51.993 [2024-07-15 18:38:14.501797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:11214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.993 [2024-07-15 18:38:14.501808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:51.993 [2024-07-15 18:38:14.511961] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:51.993 [2024-07-15 18:38:14.511991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:20249 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.993 [2024-07-15 18:38:14.512002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:51.993 [2024-07-15 18:38:14.521804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:51.993 [2024-07-15 18:38:14.521834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:16869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.993 [2024-07-15 18:38:14.521845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:51.993 [2024-07-15 18:38:14.532459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:51.993 [2024-07-15 18:38:14.532489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.993 [2024-07-15 18:38:14.532499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:51.993 [2024-07-15 18:38:14.543746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:51.993 [2024-07-15 18:38:14.543775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:3148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.993 [2024-07-15 18:38:14.543786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:51.993 [2024-07-15 18:38:14.552584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:51.993 [2024-07-15 18:38:14.552609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:23599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.993 [2024-07-15 18:38:14.552620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:51.993 [2024-07-15 18:38:14.563355] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 
00:18:51.993 [2024-07-15 18:38:14.563386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:1866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.993 [2024-07-15 18:38:14.563397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:51.993 [2024-07-15 18:38:14.574521] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:51.993 [2024-07-15 18:38:14.574550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:7679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.993 [2024-07-15 18:38:14.574561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:51.994 [2024-07-15 18:38:14.585844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:51.994 [2024-07-15 18:38:14.585877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:10871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.994 [2024-07-15 18:38:14.585888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:51.994 [2024-07-15 18:38:14.595027] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:51.994 [2024-07-15 18:38:14.595056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:23487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.994 [2024-07-15 18:38:14.595067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:52.264 [2024-07-15 18:38:14.606150] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:52.264 [2024-07-15 18:38:14.606178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:11311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.264 [2024-07-15 18:38:14.606189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:52.264 [2024-07-15 18:38:14.616871] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:52.264 [2024-07-15 18:38:14.616903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:22662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.264 [2024-07-15 18:38:14.616914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:52.264 [2024-07-15 18:38:14.627411] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:52.264 [2024-07-15 18:38:14.627443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.264 [2024-07-15 18:38:14.627454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:52.264 [2024-07-15 18:38:14.637726] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x16543e0) 00:18:52.264 [2024-07-15 18:38:14.637755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:8439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.264 [2024-07-15 18:38:14.637766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:52.264 [2024-07-15 18:38:14.649300] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:52.264 [2024-07-15 18:38:14.649331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:9250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.264 [2024-07-15 18:38:14.649342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:52.264 [2024-07-15 18:38:14.660405] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:52.264 [2024-07-15 18:38:14.660434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:6192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.264 [2024-07-15 18:38:14.660445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:52.264 [2024-07-15 18:38:14.668839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:52.264 [2024-07-15 18:38:14.668868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:11730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.264 [2024-07-15 18:38:14.668879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:52.264 [2024-07-15 18:38:14.679074] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:52.264 [2024-07-15 18:38:14.679102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:10168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.264 [2024-07-15 18:38:14.679113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:52.264 [2024-07-15 18:38:14.690002] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:52.264 [2024-07-15 18:38:14.690033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:1244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.264 [2024-07-15 18:38:14.690044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:52.264 [2024-07-15 18:38:14.700672] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:52.264 [2024-07-15 18:38:14.700695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:21108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.264 [2024-07-15 18:38:14.700706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:52.264 [2024-07-15 18:38:14.711044] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:52.264 [2024-07-15 18:38:14.711075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:20986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.264 [2024-07-15 18:38:14.711086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:52.264 [2024-07-15 18:38:14.722216] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:52.264 [2024-07-15 18:38:14.722247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:24209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.264 [2024-07-15 18:38:14.722258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:52.264 [2024-07-15 18:38:14.732220] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:52.264 [2024-07-15 18:38:14.732249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:20553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.264 [2024-07-15 18:38:14.732260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:52.264 [2024-07-15 18:38:14.741653] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:52.264 [2024-07-15 18:38:14.741683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:7802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.264 [2024-07-15 18:38:14.741693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:52.264 [2024-07-15 18:38:14.752360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:52.264 [2024-07-15 18:38:14.752391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:3516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.264 [2024-07-15 18:38:14.752401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:52.264 [2024-07-15 18:38:14.762765] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:52.264 [2024-07-15 18:38:14.762795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:20675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.264 [2024-07-15 18:38:14.762806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:52.264 [2024-07-15 18:38:14.773584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:52.264 [2024-07-15 18:38:14.773614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:13958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.264 [2024-07-15 18:38:14.773625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:18:52.264 [2024-07-15 18:38:14.782923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:52.264 [2024-07-15 18:38:14.782951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:23321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.264 [2024-07-15 18:38:14.782962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:52.264 [2024-07-15 18:38:14.793832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:52.264 [2024-07-15 18:38:14.793860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:22414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.264 [2024-07-15 18:38:14.793871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:52.264 [2024-07-15 18:38:14.804933] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:52.264 [2024-07-15 18:38:14.804962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:19514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.264 [2024-07-15 18:38:14.804973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:52.264 [2024-07-15 18:38:14.814417] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:52.264 [2024-07-15 18:38:14.814446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:20445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.264 [2024-07-15 18:38:14.814457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:52.264 [2024-07-15 18:38:14.825909] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:52.264 [2024-07-15 18:38:14.825937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:20698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.264 [2024-07-15 18:38:14.825948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:52.265 [2024-07-15 18:38:14.834881] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:52.265 [2024-07-15 18:38:14.834908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:7441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.265 [2024-07-15 18:38:14.834919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:52.265 [2024-07-15 18:38:14.845877] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:52.265 [2024-07-15 18:38:14.845906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:22425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.265 [2024-07-15 18:38:14.845917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:52.265 [2024-07-15 18:38:14.855973] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:52.265 [2024-07-15 18:38:14.856002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:17097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.265 [2024-07-15 18:38:14.856012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:52.265 [2024-07-15 18:38:14.866643] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:52.265 [2024-07-15 18:38:14.866671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:16370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.265 [2024-07-15 18:38:14.866682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:52.523 [2024-07-15 18:38:14.877687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:52.523 [2024-07-15 18:38:14.877713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:23834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.523 [2024-07-15 18:38:14.877723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:52.523 [2024-07-15 18:38:14.887143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:52.523 [2024-07-15 18:38:14.887172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.523 [2024-07-15 18:38:14.887183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:52.523 [2024-07-15 18:38:14.897667] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:52.523 [2024-07-15 18:38:14.897695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.523 [2024-07-15 18:38:14.897706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:52.523 [2024-07-15 18:38:14.907786] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:52.523 [2024-07-15 18:38:14.907813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.523 [2024-07-15 18:38:14.907824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:52.523 [2024-07-15 18:38:14.918783] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:52.523 [2024-07-15 18:38:14.918812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.523 [2024-07-15 18:38:14.918823] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:52.523 [2024-07-15 18:38:14.929081] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:52.523 [2024-07-15 18:38:14.929110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:16375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.523 [2024-07-15 18:38:14.929121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:52.524 [2024-07-15 18:38:14.938858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:52.524 [2024-07-15 18:38:14.938888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:15939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.524 [2024-07-15 18:38:14.938899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:52.524 [2024-07-15 18:38:14.950226] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:52.524 [2024-07-15 18:38:14.950258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:19834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.524 [2024-07-15 18:38:14.950269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:52.524 [2024-07-15 18:38:14.960395] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:52.524 [2024-07-15 18:38:14.960426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:5627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.524 [2024-07-15 18:38:14.960437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:52.524 [2024-07-15 18:38:14.970467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:52.524 [2024-07-15 18:38:14.970498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.524 [2024-07-15 18:38:14.970510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:52.524 [2024-07-15 18:38:14.981723] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:52.524 [2024-07-15 18:38:14.981754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.524 [2024-07-15 18:38:14.981765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:52.524 [2024-07-15 18:38:14.993175] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:52.524 [2024-07-15 18:38:14.993207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:52.524 [2024-07-15 18:38:14.993219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:52.524 [2024-07-15 18:38:15.004281] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:52.524 [2024-07-15 18:38:15.004311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:21611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.524 [2024-07-15 18:38:15.004322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:52.524 [2024-07-15 18:38:15.013793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:52.524 [2024-07-15 18:38:15.013822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:5274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.524 [2024-07-15 18:38:15.013833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:52.524 [2024-07-15 18:38:15.022948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:52.524 [2024-07-15 18:38:15.022977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:9359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.524 [2024-07-15 18:38:15.022988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:52.524 [2024-07-15 18:38:15.032622] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:52.524 [2024-07-15 18:38:15.032650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:13101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.524 [2024-07-15 18:38:15.032661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:52.524 [2024-07-15 18:38:15.043328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:52.524 [2024-07-15 18:38:15.043350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:19597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.524 [2024-07-15 18:38:15.043361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:52.524 [2024-07-15 18:38:15.054313] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:52.524 [2024-07-15 18:38:15.054343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:20193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.524 [2024-07-15 18:38:15.054354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:52.524 [2024-07-15 18:38:15.065043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:52.524 [2024-07-15 18:38:15.065074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 
lba:10549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.524 [2024-07-15 18:38:15.065085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:52.524 [2024-07-15 18:38:15.074267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:52.524 [2024-07-15 18:38:15.074298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:7503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.524 [2024-07-15 18:38:15.074309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:52.524 [2024-07-15 18:38:15.085547] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:52.524 [2024-07-15 18:38:15.085588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:4391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.524 [2024-07-15 18:38:15.085599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:52.524 [2024-07-15 18:38:15.096445] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:52.524 [2024-07-15 18:38:15.096477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:12093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.524 [2024-07-15 18:38:15.096488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:52.524 [2024-07-15 18:38:15.105647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:52.524 [2024-07-15 18:38:15.105672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:4167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.524 [2024-07-15 18:38:15.105683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:52.524 [2024-07-15 18:38:15.115373] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:52.524 [2024-07-15 18:38:15.115402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:10119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.524 [2024-07-15 18:38:15.115413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:52.524 [2024-07-15 18:38:15.126006] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:52.524 [2024-07-15 18:38:15.126035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.524 [2024-07-15 18:38:15.126046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:52.783 [2024-07-15 18:38:15.136842] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:52.783 [2024-07-15 18:38:15.136870] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:1307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.783 [2024-07-15 18:38:15.136882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:52.783 [2024-07-15 18:38:15.146487] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:52.783 [2024-07-15 18:38:15.146517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:2959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.783 [2024-07-15 18:38:15.146528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:52.783 [2024-07-15 18:38:15.157299] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:52.783 [2024-07-15 18:38:15.157330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:3036 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.783 [2024-07-15 18:38:15.157341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:52.783 [2024-07-15 18:38:15.166014] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:52.783 [2024-07-15 18:38:15.166043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:9918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.783 [2024-07-15 18:38:15.166054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:52.783 [2024-07-15 18:38:15.177106] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:52.783 [2024-07-15 18:38:15.177136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:1180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.783 [2024-07-15 18:38:15.177147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:52.783 [2024-07-15 18:38:15.188371] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:52.783 [2024-07-15 18:38:15.188399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:6921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.783 [2024-07-15 18:38:15.188410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:52.783 [2024-07-15 18:38:15.198656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:52.783 [2024-07-15 18:38:15.198684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:19461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.783 [2024-07-15 18:38:15.198694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:52.783 [2024-07-15 18:38:15.209261] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 
00:18:52.783 [2024-07-15 18:38:15.209290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:11059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.783 [2024-07-15 18:38:15.209300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:52.783 [2024-07-15 18:38:15.218344] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:52.783 [2024-07-15 18:38:15.218373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:24718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.783 [2024-07-15 18:38:15.218384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:52.783 [2024-07-15 18:38:15.229210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:52.783 [2024-07-15 18:38:15.229239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.783 [2024-07-15 18:38:15.229250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:52.783 [2024-07-15 18:38:15.241182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:52.783 [2024-07-15 18:38:15.241212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:8041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.783 [2024-07-15 18:38:15.241223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:52.783 [2024-07-15 18:38:15.252863] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:52.783 [2024-07-15 18:38:15.252893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.783 [2024-07-15 18:38:15.252904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:52.783 [2024-07-15 18:38:15.263720] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:52.783 [2024-07-15 18:38:15.263748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.783 [2024-07-15 18:38:15.263759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:52.783 [2024-07-15 18:38:15.273813] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:52.783 [2024-07-15 18:38:15.273842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.783 [2024-07-15 18:38:15.273853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:52.783 [2024-07-15 18:38:15.282840] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x16543e0) 00:18:52.783 [2024-07-15 18:38:15.282868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:1314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.783 [2024-07-15 18:38:15.282879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:52.783 [2024-07-15 18:38:15.295107] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:52.784 [2024-07-15 18:38:15.295135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:2815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.784 [2024-07-15 18:38:15.295146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:52.784 [2024-07-15 18:38:15.306110] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:52.784 [2024-07-15 18:38:15.306138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:3703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.784 [2024-07-15 18:38:15.306149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:52.784 [2024-07-15 18:38:15.317235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:52.784 [2024-07-15 18:38:15.317265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:17734 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.784 [2024-07-15 18:38:15.317276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:52.784 [2024-07-15 18:38:15.327799] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:52.784 [2024-07-15 18:38:15.327828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:12788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.784 [2024-07-15 18:38:15.327839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:52.784 [2024-07-15 18:38:15.338344] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:52.784 [2024-07-15 18:38:15.338373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:12014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.784 [2024-07-15 18:38:15.338384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:52.784 [2024-07-15 18:38:15.348183] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:52.784 [2024-07-15 18:38:15.348213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:13693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.784 [2024-07-15 18:38:15.348224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:52.784 [2024-07-15 18:38:15.357878] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:52.784 [2024-07-15 18:38:15.357905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:13083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.784 [2024-07-15 18:38:15.357916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:52.784 [2024-07-15 18:38:15.369111] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:52.784 [2024-07-15 18:38:15.369140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.784 [2024-07-15 18:38:15.369151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:52.784 [2024-07-15 18:38:15.379642] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:52.784 [2024-07-15 18:38:15.379671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:6123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.784 [2024-07-15 18:38:15.379682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:52.784 [2024-07-15 18:38:15.390605] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:52.784 [2024-07-15 18:38:15.390633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:3860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.784 [2024-07-15 18:38:15.390644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:53.043 [2024-07-15 18:38:15.401833] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:53.043 [2024-07-15 18:38:15.401864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:25563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:53.043 [2024-07-15 18:38:15.401875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:53.043 [2024-07-15 18:38:15.411106] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:53.043 [2024-07-15 18:38:15.411135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:8311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:53.043 [2024-07-15 18:38:15.411146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:53.043 [2024-07-15 18:38:15.422467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:53.043 [2024-07-15 18:38:15.422497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:53.043 [2024-07-15 18:38:15.422508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:18:53.043 [2024-07-15 18:38:15.433017] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:53.043 [2024-07-15 18:38:15.433046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:10617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:53.043 [2024-07-15 18:38:15.433057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:53.043 [2024-07-15 18:38:15.441679] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:53.043 [2024-07-15 18:38:15.441708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:18082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:53.043 [2024-07-15 18:38:15.441718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:53.043 [2024-07-15 18:38:15.452454] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:53.043 [2024-07-15 18:38:15.452484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:1575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:53.043 [2024-07-15 18:38:15.452495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:53.043 [2024-07-15 18:38:15.462184] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:53.043 [2024-07-15 18:38:15.462213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:12116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:53.043 [2024-07-15 18:38:15.462224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:53.043 [2024-07-15 18:38:15.474752] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:53.043 [2024-07-15 18:38:15.474781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:2462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:53.043 [2024-07-15 18:38:15.474791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:53.043 [2024-07-15 18:38:15.485418] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:53.043 [2024-07-15 18:38:15.485450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:10940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:53.043 [2024-07-15 18:38:15.485462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:53.043 [2024-07-15 18:38:15.495473] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:53.043 [2024-07-15 18:38:15.495502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:22257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:53.043 [2024-07-15 18:38:15.495513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:53.043 [2024-07-15 18:38:15.505771] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:53.043 [2024-07-15 18:38:15.505800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:53.043 [2024-07-15 18:38:15.505811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:53.043 [2024-07-15 18:38:15.515108] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:53.043 [2024-07-15 18:38:15.515137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:20769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:53.043 [2024-07-15 18:38:15.515147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:53.043 [2024-07-15 18:38:15.526189] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:53.043 [2024-07-15 18:38:15.526220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:18852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:53.043 [2024-07-15 18:38:15.526231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:53.043 [2024-07-15 18:38:15.537530] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:53.043 [2024-07-15 18:38:15.537580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:7690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:53.043 [2024-07-15 18:38:15.537593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:53.043 [2024-07-15 18:38:15.546955] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:53.043 [2024-07-15 18:38:15.546987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:15767 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:53.043 [2024-07-15 18:38:15.546998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:53.043 [2024-07-15 18:38:15.558841] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:53.043 [2024-07-15 18:38:15.558871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:5202 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:53.043 [2024-07-15 18:38:15.558882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:53.043 [2024-07-15 18:38:15.569165] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:53.043 [2024-07-15 18:38:15.569195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:7428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:53.043 [2024-07-15 18:38:15.569207] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:53.043 [2024-07-15 18:38:15.577907] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:53.043 [2024-07-15 18:38:15.577934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:21595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:53.043 [2024-07-15 18:38:15.577945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:53.044 [2024-07-15 18:38:15.589656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:53.044 [2024-07-15 18:38:15.589685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:53.044 [2024-07-15 18:38:15.589696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:53.044 [2024-07-15 18:38:15.601050] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:53.044 [2024-07-15 18:38:15.601080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:2097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:53.044 [2024-07-15 18:38:15.601091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:53.044 [2024-07-15 18:38:15.610417] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:53.044 [2024-07-15 18:38:15.610447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:18619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:53.044 [2024-07-15 18:38:15.610458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:53.044 [2024-07-15 18:38:15.620795] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:53.044 [2024-07-15 18:38:15.620824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:20091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:53.044 [2024-07-15 18:38:15.620835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:53.044 [2024-07-15 18:38:15.632586] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:53.044 [2024-07-15 18:38:15.632611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:6669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:53.044 [2024-07-15 18:38:15.632622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:53.044 [2024-07-15 18:38:15.643276] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:53.044 [2024-07-15 18:38:15.643304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:11372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:53.044 [2024-07-15 18:38:15.643315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:53.044 [2024-07-15 18:38:15.654188] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:53.044 [2024-07-15 18:38:15.654217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:18641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:53.044 [2024-07-15 18:38:15.654228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:53.303 [2024-07-15 18:38:15.663852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:53.303 [2024-07-15 18:38:15.663881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:3762 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:53.303 [2024-07-15 18:38:15.663892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:53.303 [2024-07-15 18:38:15.675023] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:53.303 [2024-07-15 18:38:15.675052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:91 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:53.303 [2024-07-15 18:38:15.675063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:53.303 [2024-07-15 18:38:15.685511] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:53.303 [2024-07-15 18:38:15.685541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:23909 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:53.303 [2024-07-15 18:38:15.685552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:53.303 [2024-07-15 18:38:15.694228] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:53.303 [2024-07-15 18:38:15.694258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:53.303 [2024-07-15 18:38:15.694270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:53.303 [2024-07-15 18:38:15.706636] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:53.303 [2024-07-15 18:38:15.706667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:53.303 [2024-07-15 18:38:15.706679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:53.303 [2024-07-15 18:38:15.718176] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:53.303 [2024-07-15 18:38:15.718207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 
lba:15124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:53.303 [2024-07-15 18:38:15.718219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:53.303 [2024-07-15 18:38:15.728996] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:53.303 [2024-07-15 18:38:15.729026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:4650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:53.303 [2024-07-15 18:38:15.729037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:53.303 [2024-07-15 18:38:15.737504] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:53.303 [2024-07-15 18:38:15.737535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:21309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:53.303 [2024-07-15 18:38:15.737546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:53.303 [2024-07-15 18:38:15.748008] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:53.303 [2024-07-15 18:38:15.748038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:23283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:53.303 [2024-07-15 18:38:15.748049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:53.303 [2024-07-15 18:38:15.759346] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:53.303 [2024-07-15 18:38:15.759374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:53.303 [2024-07-15 18:38:15.759386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:53.303 [2024-07-15 18:38:15.769484] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:53.303 [2024-07-15 18:38:15.769513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:53.303 [2024-07-15 18:38:15.769524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:53.303 [2024-07-15 18:38:15.780719] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:53.303 [2024-07-15 18:38:15.780748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:22682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:53.303 [2024-07-15 18:38:15.780759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:53.303 [2024-07-15 18:38:15.790329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:53.303 [2024-07-15 18:38:15.790360] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:10747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:53.303 [2024-07-15 18:38:15.790371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:53.303 [2024-07-15 18:38:15.801909] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:53.303 [2024-07-15 18:38:15.801938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:6665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:53.303 [2024-07-15 18:38:15.801950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:53.303 [2024-07-15 18:38:15.813492] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:53.303 [2024-07-15 18:38:15.813522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:53.303 [2024-07-15 18:38:15.813533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:53.303 [2024-07-15 18:38:15.822327] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:53.303 [2024-07-15 18:38:15.822357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:5221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:53.303 [2024-07-15 18:38:15.822368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:53.303 [2024-07-15 18:38:15.831938] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:53.303 [2024-07-15 18:38:15.831969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:22809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:53.303 [2024-07-15 18:38:15.831980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:53.303 [2024-07-15 18:38:15.844457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:53.303 [2024-07-15 18:38:15.844488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:53.303 [2024-07-15 18:38:15.844499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:53.303 [2024-07-15 18:38:15.853865] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:53.303 [2024-07-15 18:38:15.853895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:22569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:53.303 [2024-07-15 18:38:15.853906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:53.303 [2024-07-15 18:38:15.864664] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:53.303 
[2024-07-15 18:38:15.864694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:17817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:53.303 [2024-07-15 18:38:15.864705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:53.303 [2024-07-15 18:38:15.875879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:53.303 [2024-07-15 18:38:15.875909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:11523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:53.303 [2024-07-15 18:38:15.875920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:53.303 [2024-07-15 18:38:15.886970] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:53.303 [2024-07-15 18:38:15.887001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:5957 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:53.303 [2024-07-15 18:38:15.887012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:53.303 [2024-07-15 18:38:15.897228] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:53.303 [2024-07-15 18:38:15.897258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:10873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:53.303 [2024-07-15 18:38:15.897269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:53.303 [2024-07-15 18:38:15.908883] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:53.303 [2024-07-15 18:38:15.908914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:9060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:53.303 [2024-07-15 18:38:15.908924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:53.563 [2024-07-15 18:38:15.918651] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:53.563 [2024-07-15 18:38:15.918679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:14368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:53.563 [2024-07-15 18:38:15.918690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:53.563 [2024-07-15 18:38:15.928496] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:53.563 [2024-07-15 18:38:15.928525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:7245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:53.563 [2024-07-15 18:38:15.928536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:53.563 [2024-07-15 18:38:15.938551] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x16543e0) 00:18:53.563 [2024-07-15 18:38:15.938589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:17970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:53.563 [2024-07-15 18:38:15.938600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:53.563 [2024-07-15 18:38:15.949878] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:53.563 [2024-07-15 18:38:15.949907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:23327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:53.563 [2024-07-15 18:38:15.949918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:53.563 [2024-07-15 18:38:15.960084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:53.563 [2024-07-15 18:38:15.960126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:5256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:53.563 [2024-07-15 18:38:15.960137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:53.563 [2024-07-15 18:38:15.969375] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:53.563 [2024-07-15 18:38:15.969405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:16935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:53.563 [2024-07-15 18:38:15.969416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:53.563 [2024-07-15 18:38:15.980022] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:53.563 [2024-07-15 18:38:15.980052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:16517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:53.563 [2024-07-15 18:38:15.980063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:53.563 [2024-07-15 18:38:15.990007] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:53.563 [2024-07-15 18:38:15.990036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:18007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:53.563 [2024-07-15 18:38:15.990047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:53.563 [2024-07-15 18:38:16.000503] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:53.563 [2024-07-15 18:38:16.000535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:14852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:53.563 [2024-07-15 18:38:16.000546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:53.563 [2024-07-15 18:38:16.011728] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:53.563 [2024-07-15 18:38:16.011758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:25524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:53.563 [2024-07-15 18:38:16.011768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:53.563 [2024-07-15 18:38:16.022336] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:53.563 [2024-07-15 18:38:16.022367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:6991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:53.563 [2024-07-15 18:38:16.022378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:53.563 [2024-07-15 18:38:16.034235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:53.563 [2024-07-15 18:38:16.034264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:15559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:53.563 [2024-07-15 18:38:16.034275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:53.563 [2024-07-15 18:38:16.043821] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:53.563 [2024-07-15 18:38:16.043849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:12298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:53.563 [2024-07-15 18:38:16.043860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:53.563 [2024-07-15 18:38:16.053702] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:53.563 [2024-07-15 18:38:16.053732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:53.563 [2024-07-15 18:38:16.053743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:53.563 [2024-07-15 18:38:16.064772] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:53.563 [2024-07-15 18:38:16.064802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:53.563 [2024-07-15 18:38:16.064814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:53.563 [2024-07-15 18:38:16.076455] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:53.563 [2024-07-15 18:38:16.076487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:53.563 [2024-07-15 18:38:16.076498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:18:53.563 [2024-07-15 18:38:16.087026] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:53.563 [2024-07-15 18:38:16.087056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:53.563 [2024-07-15 18:38:16.087067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:53.563 [2024-07-15 18:38:16.096201] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:53.563 [2024-07-15 18:38:16.096229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:53.564 [2024-07-15 18:38:16.096240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:53.564 [2024-07-15 18:38:16.107167] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16543e0) 00:18:53.564 [2024-07-15 18:38:16.107198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:11581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:53.564 [2024-07-15 18:38:16.107216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:53.564 00:18:53.564 Latency(us) 00:18:53.564 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:53.564 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:18:53.564 nvme0n1 : 2.00 24158.66 94.37 0.00 0.00 5293.03 2737.25 14844.30 00:18:53.564 =================================================================================================================== 00:18:53.564 Total : 24158.66 94.37 0.00 0.00 5293.03 2737.25 14844.30 00:18:53.564 0 00:18:53.564 18:38:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:53.564 18:38:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:53.564 18:38:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:53.564 18:38:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:53.564 | .driver_specific 00:18:53.564 | .nvme_error 00:18:53.564 | .status_code 00:18:53.564 | .command_transient_transport_error' 00:18:53.823 18:38:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 189 > 0 )) 00:18:53.823 18:38:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 93059 00:18:53.823 18:38:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 93059 ']' 00:18:53.823 18:38:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 93059 00:18:53.823 18:38:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:18:53.823 18:38:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:53.823 18:38:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93059 00:18:53.823 18:38:16 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:53.823 18:38:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:53.823 killing process with pid 93059 00:18:53.823 Received shutdown signal, test time was about 2.000000 seconds 00:18:53.823 00:18:53.823 Latency(us) 00:18:53.823 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:53.823 =================================================================================================================== 00:18:53.823 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:53.823 18:38:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93059' 00:18:53.823 18:38:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 93059 00:18:53.823 18:38:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 93059 00:18:54.082 18:38:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:18:54.082 18:38:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:18:54.082 18:38:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:18:54.082 18:38:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:18:54.082 18:38:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:18:54.082 18:38:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=93146 00:18:54.082 18:38:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 93146 /var/tmp/bperf.sock 00:18:54.082 18:38:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 93146 ']' 00:18:54.082 18:38:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:54.082 18:38:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:18:54.082 18:38:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:54.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:54.082 18:38:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:54.082 18:38:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:54.082 18:38:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:54.082 [2024-07-15 18:38:16.612411] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:18:54.082 [2024-07-15 18:38:16.612481] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93146 ] 00:18:54.082 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:54.082 Zero copy mechanism will not be used. 
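The transient-error check traced above boils down to reading the per-command NVMe error counters that bdevperf keeps once bdev_nvme_set_options is called with --nvme-error-stat, and requiring the transient transport error count to be non-zero. A minimal bash sketch of that step, assuming the same rpc.py path, bperf RPC socket and bdev name seen in this run:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock
bdev=nvme0n1

# Dump per-bdev I/O statistics from bdevperf and pull out the counter that
# the injected data digest errors are folded into (the COMMAND TRANSIENT
# TRANSPORT ERROR, status 00/22, completions printed above).
errcount=$("$rpc" -s "$sock" bdev_get_iostat -b "$bdev" |
  jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')

# The test passes when at least one digest error surfaced as a transient
# transport error; this run counted 189 of them.
(( errcount > 0 )) && echo "transient transport errors: $errcount"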
00:18:54.341 [2024-07-15 18:38:16.755387] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:54.341 [2024-07-15 18:38:16.845205] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:54.907 18:38:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:54.907 18:38:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:18:54.907 18:38:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:54.907 18:38:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:55.165 18:38:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:55.165 18:38:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.165 18:38:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:55.165 18:38:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.165 18:38:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:55.165 18:38:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:55.423 nvme0n1 00:18:55.423 18:38:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:18:55.423 18:38:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.423 18:38:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:55.423 18:38:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.423 18:38:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:55.423 18:38:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:55.682 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:55.682 Zero copy mechanism will not be used. 00:18:55.682 Running I/O for 2 seconds... 
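Stripped of the xtrace prefixes, the setup for this second pass (random 128 KiB reads, queue depth 16) is the sequence sketched below. It is assembled only from the commands visible in the trace, assuming the same binary paths and target address; the accel_error_inject_error calls go through the harness's rpc_cmd helper, and pointing them at the default RPC socket here is an assumption about where that helper is aimed in this run.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bperf_sock=/var/tmp/bperf.sock

# Start bdevperf as the NVMe/TCP initiator: random reads, 128 KiB I/O,
# queue depth 16, 2 second run, -z so it waits for RPC configuration.
# (The harness waits for the socket with waitforlisten before issuing RPCs.)
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
  -m 2 -r "$bperf_sock" -w randread -o 131072 -t 2 -q 16 -z &

# Keep per-status-code NVMe error counters and retry failed I/O indefinitely,
# so digest errors are counted rather than failing the job outright.
"$rpc" -s "$bperf_sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Disable any active crc32c fault injection, attach the controller with data
# digest (--ddgst) enabled, then switch the injection to corrupt mode with the
# same -i 32 argument the harness passes. These two inject calls use rpc_cmd
# in the harness; the default socket used below is an assumption.
"$rpc" accel_error_inject_error -o crc32c -t disable
"$rpc" -s "$bperf_sock" bdev_nvme_attach_controller --ddgst -t tcp \
  -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
"$rpc" accel_error_inject_error -o crc32c -t corrupt -i 32

# Run the workload; each corrupted digest shows up in the log that follows as
# a "COMMAND TRANSIENT TRANSPORT ERROR (00/22)" completion.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$bperf_sock" perform_tests

The counter read back by get_transient_errcount afterwards then confirms that the corrupted digests were reported as transient transport errors rather than hard I/O failures.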
00:18:55.682 [2024-07-15 18:38:18.063778] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.682 [2024-07-15 18:38:18.063825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.682 [2024-07-15 18:38:18.063839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:55.682 [2024-07-15 18:38:18.067839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.682 [2024-07-15 18:38:18.067882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.682 [2024-07-15 18:38:18.067894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:55.682 [2024-07-15 18:38:18.072022] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.682 [2024-07-15 18:38:18.072061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.682 [2024-07-15 18:38:18.072072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:55.682 [2024-07-15 18:38:18.075646] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.682 [2024-07-15 18:38:18.075683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.682 [2024-07-15 18:38:18.075694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:55.682 [2024-07-15 18:38:18.078306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.682 [2024-07-15 18:38:18.078339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.682 [2024-07-15 18:38:18.078350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:55.682 [2024-07-15 18:38:18.081539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.682 [2024-07-15 18:38:18.081585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.682 [2024-07-15 18:38:18.081597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:55.682 [2024-07-15 18:38:18.085113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.682 [2024-07-15 18:38:18.085150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.682 [2024-07-15 18:38:18.085161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:55.682 [2024-07-15 18:38:18.089050] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.682 [2024-07-15 18:38:18.089088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.682 [2024-07-15 18:38:18.089099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:55.682 [2024-07-15 18:38:18.092853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.682 [2024-07-15 18:38:18.092889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.682 [2024-07-15 18:38:18.092900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:55.682 [2024-07-15 18:38:18.094962] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.682 [2024-07-15 18:38:18.094995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.682 [2024-07-15 18:38:18.095006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:55.682 [2024-07-15 18:38:18.098515] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.682 [2024-07-15 18:38:18.098554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.682 [2024-07-15 18:38:18.098577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:55.682 [2024-07-15 18:38:18.101153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.682 [2024-07-15 18:38:18.101189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.682 [2024-07-15 18:38:18.101200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:55.682 [2024-07-15 18:38:18.104455] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.682 [2024-07-15 18:38:18.104492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.682 [2024-07-15 18:38:18.104504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:55.682 [2024-07-15 18:38:18.108117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.682 [2024-07-15 18:38:18.108154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.682 [2024-07-15 18:38:18.108165] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:55.683 [2024-07-15 18:38:18.111995] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.683 [2024-07-15 18:38:18.112032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.683 [2024-07-15 18:38:18.112042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:55.683 [2024-07-15 18:38:18.115407] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.683 [2024-07-15 18:38:18.115442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.683 [2024-07-15 18:38:18.115453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:55.683 [2024-07-15 18:38:18.117602] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.683 [2024-07-15 18:38:18.117631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.683 [2024-07-15 18:38:18.117641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:55.683 [2024-07-15 18:38:18.121452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.683 [2024-07-15 18:38:18.121490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.683 [2024-07-15 18:38:18.121501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:55.683 [2024-07-15 18:38:18.124149] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.683 [2024-07-15 18:38:18.124186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.683 [2024-07-15 18:38:18.124196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:55.683 [2024-07-15 18:38:18.127359] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.683 [2024-07-15 18:38:18.127393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.683 [2024-07-15 18:38:18.127404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:55.683 [2024-07-15 18:38:18.131253] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.683 [2024-07-15 18:38:18.131287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.683 [2024-07-15 18:38:18.131298] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:55.683 [2024-07-15 18:38:18.134796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.683 [2024-07-15 18:38:18.134829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.683 [2024-07-15 18:38:18.134839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:55.683 [2024-07-15 18:38:18.137383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.683 [2024-07-15 18:38:18.137419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.683 [2024-07-15 18:38:18.137430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:55.683 [2024-07-15 18:38:18.140587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.683 [2024-07-15 18:38:18.140623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.683 [2024-07-15 18:38:18.140634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:55.683 [2024-07-15 18:38:18.143520] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.683 [2024-07-15 18:38:18.143557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.683 [2024-07-15 18:38:18.143580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:55.683 [2024-07-15 18:38:18.146493] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.683 [2024-07-15 18:38:18.146526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.683 [2024-07-15 18:38:18.146537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:55.683 [2024-07-15 18:38:18.149759] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.683 [2024-07-15 18:38:18.149793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.683 [2024-07-15 18:38:18.149804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:55.683 [2024-07-15 18:38:18.152810] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.683 [2024-07-15 18:38:18.152844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:55.683 [2024-07-15 18:38:18.152854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:55.683 [2024-07-15 18:38:18.156013] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.683 [2024-07-15 18:38:18.156051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.683 [2024-07-15 18:38:18.156063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:55.683 [2024-07-15 18:38:18.159254] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.683 [2024-07-15 18:38:18.159287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.683 [2024-07-15 18:38:18.159299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:55.683 [2024-07-15 18:38:18.161799] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.683 [2024-07-15 18:38:18.161832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.683 [2024-07-15 18:38:18.161843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:55.683 [2024-07-15 18:38:18.165055] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.683 [2024-07-15 18:38:18.165092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.683 [2024-07-15 18:38:18.165103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:55.683 [2024-07-15 18:38:18.168312] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.683 [2024-07-15 18:38:18.168350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.683 [2024-07-15 18:38:18.168361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:55.683 [2024-07-15 18:38:18.170943] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.683 [2024-07-15 18:38:18.170975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.683 [2024-07-15 18:38:18.170986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:55.683 [2024-07-15 18:38:18.174118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.683 [2024-07-15 18:38:18.174151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16000 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.683 [2024-07-15 18:38:18.174162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:55.683 [2024-07-15 18:38:18.177080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.683 [2024-07-15 18:38:18.177116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.683 [2024-07-15 18:38:18.177127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:55.684 [2024-07-15 18:38:18.180328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.684 [2024-07-15 18:38:18.180364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.684 [2024-07-15 18:38:18.180375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:55.684 [2024-07-15 18:38:18.183415] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.684 [2024-07-15 18:38:18.183450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.684 [2024-07-15 18:38:18.183461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:55.684 [2024-07-15 18:38:18.187030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.684 [2024-07-15 18:38:18.187064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.684 [2024-07-15 18:38:18.187074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:55.684 [2024-07-15 18:38:18.189315] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.684 [2024-07-15 18:38:18.189349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.684 [2024-07-15 18:38:18.189360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:55.684 [2024-07-15 18:38:18.192425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.684 [2024-07-15 18:38:18.192463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.684 [2024-07-15 18:38:18.192474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:55.684 [2024-07-15 18:38:18.195322] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.684 [2024-07-15 18:38:18.195356] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:7 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.684 [2024-07-15 18:38:18.195367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:55.684 [2024-07-15 18:38:18.198660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.684 [2024-07-15 18:38:18.198693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.684 [2024-07-15 18:38:18.198704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:55.684 [2024-07-15 18:38:18.201356] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.684 [2024-07-15 18:38:18.201392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.684 [2024-07-15 18:38:18.201403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:55.684 [2024-07-15 18:38:18.204082] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.684 [2024-07-15 18:38:18.204118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.684 [2024-07-15 18:38:18.204129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:55.684 [2024-07-15 18:38:18.207594] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.684 [2024-07-15 18:38:18.207627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.684 [2024-07-15 18:38:18.207638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:55.684 [2024-07-15 18:38:18.210124] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.684 [2024-07-15 18:38:18.210156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.684 [2024-07-15 18:38:18.210167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:55.684 [2024-07-15 18:38:18.213152] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.684 [2024-07-15 18:38:18.213187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.684 [2024-07-15 18:38:18.213198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:55.684 [2024-07-15 18:38:18.216741] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.684 [2024-07-15 18:38:18.216777] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.684 [2024-07-15 18:38:18.216788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:55.684 [2024-07-15 18:38:18.219171] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.684 [2024-07-15 18:38:18.219211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.684 [2024-07-15 18:38:18.219222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:55.684 [2024-07-15 18:38:18.222306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.684 [2024-07-15 18:38:18.222340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.684 [2024-07-15 18:38:18.222351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:55.684 [2024-07-15 18:38:18.225688] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.684 [2024-07-15 18:38:18.225723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.684 [2024-07-15 18:38:18.225734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:55.684 [2024-07-15 18:38:18.228517] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.684 [2024-07-15 18:38:18.228554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.684 [2024-07-15 18:38:18.228578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:55.684 [2024-07-15 18:38:18.231978] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.684 [2024-07-15 18:38:18.232016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.684 [2024-07-15 18:38:18.232027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:55.684 [2024-07-15 18:38:18.234861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.684 [2024-07-15 18:38:18.234893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.684 [2024-07-15 18:38:18.234904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:55.684 [2024-07-15 18:38:18.237957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x13d2380) 00:18:55.684 [2024-07-15 18:38:18.237995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.684 [2024-07-15 18:38:18.238006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:55.684 [2024-07-15 18:38:18.241238] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.684 [2024-07-15 18:38:18.241273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.684 [2024-07-15 18:38:18.241284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:55.684 [2024-07-15 18:38:18.244442] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.684 [2024-07-15 18:38:18.244480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.685 [2024-07-15 18:38:18.244490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:55.685 [2024-07-15 18:38:18.247216] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.685 [2024-07-15 18:38:18.247248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.685 [2024-07-15 18:38:18.247259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:55.685 [2024-07-15 18:38:18.250371] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.685 [2024-07-15 18:38:18.250405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.685 [2024-07-15 18:38:18.250416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:55.685 [2024-07-15 18:38:18.254067] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.685 [2024-07-15 18:38:18.254104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.685 [2024-07-15 18:38:18.254115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:55.685 [2024-07-15 18:38:18.257652] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.685 [2024-07-15 18:38:18.257686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.685 [2024-07-15 18:38:18.257697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:55.685 [2024-07-15 18:38:18.259992] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.685 [2024-07-15 18:38:18.260026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.685 [2024-07-15 18:38:18.260037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:55.685 [2024-07-15 18:38:18.263337] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.685 [2024-07-15 18:38:18.263371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.685 [2024-07-15 18:38:18.263383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:55.685 [2024-07-15 18:38:18.267130] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.685 [2024-07-15 18:38:18.267166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.685 [2024-07-15 18:38:18.267177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:55.685 [2024-07-15 18:38:18.269635] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.685 [2024-07-15 18:38:18.269674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.685 [2024-07-15 18:38:18.269686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:55.685 [2024-07-15 18:38:18.272909] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.685 [2024-07-15 18:38:18.272944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.685 [2024-07-15 18:38:18.272955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:55.685 [2024-07-15 18:38:18.276698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.685 [2024-07-15 18:38:18.276736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.685 [2024-07-15 18:38:18.276747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:55.685 [2024-07-15 18:38:18.279242] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.685 [2024-07-15 18:38:18.279275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.685 [2024-07-15 18:38:18.279286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
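The repeated "data digest error" notices above come from the NVMe/TCP receive path detecting a CRC32C mismatch on the data portion of a received PDU; each one is paired with the affected READ being completed as COMMAND TRANSIENT TRANSPORT ERROR (00/22) with dnr:0, i.e. the host may retry it. As a rough, self-contained illustration of that kind of digest check (a sketch only, not SPDK's actual implementation; the payload and wire_digest values below are hypothetical placeholders):

    /* Illustrative sketch: a stand-alone CRC32C check of the kind the
     * NVMe/TCP receive path performs on a data PDU. Not SPDK's code. */
    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>

    static uint32_t crc32c(const uint8_t *buf, size_t len)
    {
        /* Bitwise CRC32C (Castagnoli): reflected polynomial 0x82F63B78,
         * seed 0xFFFFFFFF, final XOR 0xFFFFFFFF. */
        uint32_t crc = 0xFFFFFFFFu;
        for (size_t i = 0; i < len; i++) {
            crc ^= buf[i];
            for (int bit = 0; bit < 8; bit++)
                crc = (crc & 1) ? (crc >> 1) ^ 0x82F63B78u : crc >> 1;
        }
        return crc ^ 0xFFFFFFFFu;
    }

    int main(void)
    {
        const uint8_t payload[] = "C2H data PDU payload"; /* hypothetical data */
        uint32_t wire_digest = 0xDEADBEEFu;               /* hypothetical digest from the PDU */

        if (crc32c(payload, sizeof(payload) - 1) != wire_digest) {
            /* Mismatch: this is the condition reported in the log as a
             * "data digest error"; the command is then completed with a
             * transient transport error and dnr:0, so it may be retried. */
            fprintf(stderr, "data digest error\n");
            return 1;
        }
        return 0;
    }

A table-driven or hardware-accelerated CRC32C would normally be used instead of the bitwise loop; the bitwise form just keeps the sketch dependency-free.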
00:18:55.685 [2024-07-15 18:38:18.282512] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.685 [2024-07-15 18:38:18.282547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.685 [2024-07-15 18:38:18.282558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:55.685 [2024-07-15 18:38:18.285391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.685 [2024-07-15 18:38:18.285427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.685 [2024-07-15 18:38:18.285438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:55.685 [2024-07-15 18:38:18.288349] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.685 [2024-07-15 18:38:18.288386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.685 [2024-07-15 18:38:18.288397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:55.685 [2024-07-15 18:38:18.291780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.685 [2024-07-15 18:38:18.291816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.685 [2024-07-15 18:38:18.291827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:55.945 [2024-07-15 18:38:18.294520] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.945 [2024-07-15 18:38:18.294555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.945 [2024-07-15 18:38:18.294578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:55.945 [2024-07-15 18:38:18.298075] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.945 [2024-07-15 18:38:18.298114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.945 [2024-07-15 18:38:18.298125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:55.945 [2024-07-15 18:38:18.300579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.945 [2024-07-15 18:38:18.300613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.945 [2024-07-15 18:38:18.300624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:55.945 [2024-07-15 18:38:18.303781] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.945 [2024-07-15 18:38:18.303819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.945 [2024-07-15 18:38:18.303830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:55.945 [2024-07-15 18:38:18.307732] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.945 [2024-07-15 18:38:18.307769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.945 [2024-07-15 18:38:18.307780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:55.945 [2024-07-15 18:38:18.311325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.945 [2024-07-15 18:38:18.311359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.945 [2024-07-15 18:38:18.311369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:55.945 [2024-07-15 18:38:18.313636] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.945 [2024-07-15 18:38:18.313676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.945 [2024-07-15 18:38:18.313687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:55.945 [2024-07-15 18:38:18.317461] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.945 [2024-07-15 18:38:18.317498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.945 [2024-07-15 18:38:18.317509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:55.945 [2024-07-15 18:38:18.320423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.945 [2024-07-15 18:38:18.320460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.946 [2024-07-15 18:38:18.320471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:55.946 [2024-07-15 18:38:18.323186] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.946 [2024-07-15 18:38:18.323228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.946 [2024-07-15 18:38:18.323240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:55.946 [2024-07-15 18:38:18.326657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.946 [2024-07-15 18:38:18.326692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.946 [2024-07-15 18:38:18.326703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:55.946 [2024-07-15 18:38:18.329389] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.946 [2024-07-15 18:38:18.329426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.946 [2024-07-15 18:38:18.329438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:55.946 [2024-07-15 18:38:18.332414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.946 [2024-07-15 18:38:18.332450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.946 [2024-07-15 18:38:18.332461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:55.946 [2024-07-15 18:38:18.335632] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.946 [2024-07-15 18:38:18.335667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.946 [2024-07-15 18:38:18.335678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:55.946 [2024-07-15 18:38:18.338235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.946 [2024-07-15 18:38:18.338269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.946 [2024-07-15 18:38:18.338280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:55.946 [2024-07-15 18:38:18.341770] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.946 [2024-07-15 18:38:18.341808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.946 [2024-07-15 18:38:18.341819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:55.946 [2024-07-15 18:38:18.344611] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.946 [2024-07-15 18:38:18.344647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.946 [2024-07-15 18:38:18.344658] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:55.946 [2024-07-15 18:38:18.347083] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.946 [2024-07-15 18:38:18.347117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.946 [2024-07-15 18:38:18.347128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:55.946 [2024-07-15 18:38:18.351007] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.946 [2024-07-15 18:38:18.351041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.946 [2024-07-15 18:38:18.351052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:55.946 [2024-07-15 18:38:18.354373] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.946 [2024-07-15 18:38:18.354408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.946 [2024-07-15 18:38:18.354419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:55.946 [2024-07-15 18:38:18.356788] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.946 [2024-07-15 18:38:18.356822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.946 [2024-07-15 18:38:18.356833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:55.946 [2024-07-15 18:38:18.360808] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.946 [2024-07-15 18:38:18.360844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.946 [2024-07-15 18:38:18.360856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:55.946 [2024-07-15 18:38:18.363338] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.946 [2024-07-15 18:38:18.363373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.946 [2024-07-15 18:38:18.363384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:55.946 [2024-07-15 18:38:18.366513] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.946 [2024-07-15 18:38:18.366547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:18:55.946 [2024-07-15 18:38:18.366558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:55.946 [2024-07-15 18:38:18.369964] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.946 [2024-07-15 18:38:18.369998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.946 [2024-07-15 18:38:18.370010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:55.946 [2024-07-15 18:38:18.372715] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.946 [2024-07-15 18:38:18.372745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.946 [2024-07-15 18:38:18.372756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:55.946 [2024-07-15 18:38:18.375939] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.946 [2024-07-15 18:38:18.375973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.946 [2024-07-15 18:38:18.375984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:55.946 [2024-07-15 18:38:18.379939] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.946 [2024-07-15 18:38:18.379973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.946 [2024-07-15 18:38:18.379984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:55.946 [2024-07-15 18:38:18.383729] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.946 [2024-07-15 18:38:18.383765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.946 [2024-07-15 18:38:18.383776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:55.946 [2024-07-15 18:38:18.386151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.946 [2024-07-15 18:38:18.386182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.946 [2024-07-15 18:38:18.386193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:55.946 [2024-07-15 18:38:18.389184] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.946 [2024-07-15 18:38:18.389218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4736 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.946 [2024-07-15 18:38:18.389229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:55.946 [2024-07-15 18:38:18.392816] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.946 [2024-07-15 18:38:18.392850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.946 [2024-07-15 18:38:18.392861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:55.946 [2024-07-15 18:38:18.396658] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.946 [2024-07-15 18:38:18.396692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.946 [2024-07-15 18:38:18.396704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:55.946 [2024-07-15 18:38:18.399003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.946 [2024-07-15 18:38:18.399034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.946 [2024-07-15 18:38:18.399045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:55.946 [2024-07-15 18:38:18.402398] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.946 [2024-07-15 18:38:18.402432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.946 [2024-07-15 18:38:18.402443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:55.946 [2024-07-15 18:38:18.405678] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.946 [2024-07-15 18:38:18.405711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.946 [2024-07-15 18:38:18.405722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:55.946 [2024-07-15 18:38:18.408871] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.946 [2024-07-15 18:38:18.408905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.946 [2024-07-15 18:38:18.408916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:55.946 [2024-07-15 18:38:18.411827] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.946 [2024-07-15 18:38:18.411862] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.947 [2024-07-15 18:38:18.411873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:55.947 [2024-07-15 18:38:18.415241] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.947 [2024-07-15 18:38:18.415273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.947 [2024-07-15 18:38:18.415284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:55.947 [2024-07-15 18:38:18.418256] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.947 [2024-07-15 18:38:18.418289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.947 [2024-07-15 18:38:18.418300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:55.947 [2024-07-15 18:38:18.421548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.947 [2024-07-15 18:38:18.421590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.947 [2024-07-15 18:38:18.421601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:55.947 [2024-07-15 18:38:18.424799] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.947 [2024-07-15 18:38:18.424834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.947 [2024-07-15 18:38:18.424845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:55.947 [2024-07-15 18:38:18.427686] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.947 [2024-07-15 18:38:18.427720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.947 [2024-07-15 18:38:18.427731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:55.947 [2024-07-15 18:38:18.430811] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.947 [2024-07-15 18:38:18.430845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.947 [2024-07-15 18:38:18.430856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:55.947 [2024-07-15 18:38:18.434132] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.947 [2024-07-15 18:38:18.434161] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.947 [2024-07-15 18:38:18.434172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:55.947 [2024-07-15 18:38:18.436552] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.947 [2024-07-15 18:38:18.436591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.947 [2024-07-15 18:38:18.436603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:55.947 [2024-07-15 18:38:18.439645] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.947 [2024-07-15 18:38:18.439677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.947 [2024-07-15 18:38:18.439688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:55.947 [2024-07-15 18:38:18.442623] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.947 [2024-07-15 18:38:18.442649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.947 [2024-07-15 18:38:18.442660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:55.947 [2024-07-15 18:38:18.445947] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.947 [2024-07-15 18:38:18.445983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.947 [2024-07-15 18:38:18.445994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:55.947 [2024-07-15 18:38:18.448303] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.947 [2024-07-15 18:38:18.448338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.947 [2024-07-15 18:38:18.448349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:55.947 [2024-07-15 18:38:18.451436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.947 [2024-07-15 18:38:18.451470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.947 [2024-07-15 18:38:18.451482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:55.947 [2024-07-15 18:38:18.455119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 
00:18:55.947 [2024-07-15 18:38:18.455155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.947 [2024-07-15 18:38:18.455166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:55.947 [2024-07-15 18:38:18.457476] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.947 [2024-07-15 18:38:18.457511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.947 [2024-07-15 18:38:18.457522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:55.947 [2024-07-15 18:38:18.460985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.947 [2024-07-15 18:38:18.461019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.947 [2024-07-15 18:38:18.461030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:55.947 [2024-07-15 18:38:18.463688] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.947 [2024-07-15 18:38:18.463721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.947 [2024-07-15 18:38:18.463733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:55.947 [2024-07-15 18:38:18.466849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.947 [2024-07-15 18:38:18.466882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.947 [2024-07-15 18:38:18.466894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:55.947 [2024-07-15 18:38:18.469750] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.947 [2024-07-15 18:38:18.469784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.947 [2024-07-15 18:38:18.469795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:55.947 [2024-07-15 18:38:18.473142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:55.947 [2024-07-15 18:38:18.473176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.947 [2024-07-15 18:38:18.473187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:55.947 [2024-07-15 18:38:18.476269] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x13d2380)
00:18:55.947 [2024-07-15 18:38:18.476298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:55.947 [2024-07-15 18:38:18.476309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:18:55.947 [2024-07-15 18:38:18.479187] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380)
00:18:55.947 [2024-07-15 18:38:18.479228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:55.947 [2024-07-15 18:38:18.479239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... same three-line pattern repeated from 18:38:18.482484 through 18:38:18.927719: nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done reports a data digest error on tqpair=(0x13d2380) and the corresponding READ on qid:1 (various cid and lba values) is completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22) ...]
00:18:56.472 [2024-07-15 18:38:18.930276] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380)
00:18:56.472 [2024-07-15 18:38:18.930308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:56.472 [2024-07-15 18:38:18.930319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:18:56.472 [2024-07-15 18:38:18.933483]
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.472 [2024-07-15 18:38:18.933521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.472 [2024-07-15 18:38:18.933531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:56.472 [2024-07-15 18:38:18.937317] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.472 [2024-07-15 18:38:18.937355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.472 [2024-07-15 18:38:18.937366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:56.472 [2024-07-15 18:38:18.940997] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.472 [2024-07-15 18:38:18.941033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.472 [2024-07-15 18:38:18.941044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:56.472 [2024-07-15 18:38:18.943143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.472 [2024-07-15 18:38:18.943174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.472 [2024-07-15 18:38:18.943185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:56.472 [2024-07-15 18:38:18.946682] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.472 [2024-07-15 18:38:18.946715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.472 [2024-07-15 18:38:18.946726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:56.472 [2024-07-15 18:38:18.950520] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.472 [2024-07-15 18:38:18.950558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.472 [2024-07-15 18:38:18.950581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:56.472 [2024-07-15 18:38:18.953163] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.472 [2024-07-15 18:38:18.953199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.472 [2024-07-15 18:38:18.953210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 00:18:56.472 [2024-07-15 18:38:18.956226] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.472 [2024-07-15 18:38:18.956265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.472 [2024-07-15 18:38:18.956276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:56.473 [2024-07-15 18:38:18.959770] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.473 [2024-07-15 18:38:18.959807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.473 [2024-07-15 18:38:18.959818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:56.473 [2024-07-15 18:38:18.962347] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.473 [2024-07-15 18:38:18.962380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.473 [2024-07-15 18:38:18.962391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:56.473 [2024-07-15 18:38:18.965626] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.473 [2024-07-15 18:38:18.965661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.473 [2024-07-15 18:38:18.965672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:56.473 [2024-07-15 18:38:18.969177] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.473 [2024-07-15 18:38:18.969214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.473 [2024-07-15 18:38:18.969225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:56.473 [2024-07-15 18:38:18.971862] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.473 [2024-07-15 18:38:18.971897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.473 [2024-07-15 18:38:18.971908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:56.473 [2024-07-15 18:38:18.974998] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.473 [2024-07-15 18:38:18.975032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.473 [2024-07-15 18:38:18.975043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:56.473 [2024-07-15 18:38:18.978038] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.473 [2024-07-15 18:38:18.978072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.473 [2024-07-15 18:38:18.978083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:56.473 [2024-07-15 18:38:18.980983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.473 [2024-07-15 18:38:18.981017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.473 [2024-07-15 18:38:18.981028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:56.473 [2024-07-15 18:38:18.984649] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.473 [2024-07-15 18:38:18.984686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.473 [2024-07-15 18:38:18.984697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:56.473 [2024-07-15 18:38:18.987290] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.473 [2024-07-15 18:38:18.987325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.473 [2024-07-15 18:38:18.987336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:56.473 [2024-07-15 18:38:18.990725] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.473 [2024-07-15 18:38:18.990760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.473 [2024-07-15 18:38:18.990771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:56.473 [2024-07-15 18:38:18.994458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.473 [2024-07-15 18:38:18.994494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.473 [2024-07-15 18:38:18.994506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:56.473 [2024-07-15 18:38:18.997222] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.473 [2024-07-15 18:38:18.997257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.473 [2024-07-15 18:38:18.997268] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:56.473 [2024-07-15 18:38:19.000577] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.473 [2024-07-15 18:38:19.000613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.473 [2024-07-15 18:38:19.000624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:56.473 [2024-07-15 18:38:19.004294] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.473 [2024-07-15 18:38:19.004332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.473 [2024-07-15 18:38:19.004343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:56.473 [2024-07-15 18:38:19.007257] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.473 [2024-07-15 18:38:19.007291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.473 [2024-07-15 18:38:19.007302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:56.473 [2024-07-15 18:38:19.010238] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.473 [2024-07-15 18:38:19.010272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.473 [2024-07-15 18:38:19.010283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:56.473 [2024-07-15 18:38:19.013854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.473 [2024-07-15 18:38:19.013892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.473 [2024-07-15 18:38:19.013903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:56.473 [2024-07-15 18:38:19.016281] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.473 [2024-07-15 18:38:19.016317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.473 [2024-07-15 18:38:19.016328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:56.473 [2024-07-15 18:38:19.019524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.473 [2024-07-15 18:38:19.019560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.473 [2024-07-15 18:38:19.019583] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:56.473 [2024-07-15 18:38:19.022598] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.473 [2024-07-15 18:38:19.022629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.473 [2024-07-15 18:38:19.022640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:56.473 [2024-07-15 18:38:19.025206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.473 [2024-07-15 18:38:19.025241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.473 [2024-07-15 18:38:19.025252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:56.473 [2024-07-15 18:38:19.028715] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.473 [2024-07-15 18:38:19.028753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.473 [2024-07-15 18:38:19.028764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:56.473 [2024-07-15 18:38:19.031335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.473 [2024-07-15 18:38:19.031370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.473 [2024-07-15 18:38:19.031380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:56.473 [2024-07-15 18:38:19.034356] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.473 [2024-07-15 18:38:19.034390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.473 [2024-07-15 18:38:19.034401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:56.473 [2024-07-15 18:38:19.037928] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.473 [2024-07-15 18:38:19.037966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.473 [2024-07-15 18:38:19.037977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:56.473 [2024-07-15 18:38:19.041533] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.473 [2024-07-15 18:38:19.041580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:18:56.473 [2024-07-15 18:38:19.041591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:56.474 [2024-07-15 18:38:19.043946] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.474 [2024-07-15 18:38:19.043982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.474 [2024-07-15 18:38:19.043993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:56.474 [2024-07-15 18:38:19.047746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.474 [2024-07-15 18:38:19.047784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.474 [2024-07-15 18:38:19.047795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:56.474 [2024-07-15 18:38:19.051421] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.474 [2024-07-15 18:38:19.051459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.474 [2024-07-15 18:38:19.051470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:56.474 [2024-07-15 18:38:19.055131] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.474 [2024-07-15 18:38:19.055163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.474 [2024-07-15 18:38:19.055175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:56.474 [2024-07-15 18:38:19.057415] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.474 [2024-07-15 18:38:19.057450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.474 [2024-07-15 18:38:19.057461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:56.474 [2024-07-15 18:38:19.060960] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.474 [2024-07-15 18:38:19.060997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.474 [2024-07-15 18:38:19.061008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:56.474 [2024-07-15 18:38:19.064224] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.474 [2024-07-15 18:38:19.064260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20128 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.474 [2024-07-15 18:38:19.064271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:56.474 [2024-07-15 18:38:19.066772] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.474 [2024-07-15 18:38:19.066806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.474 [2024-07-15 18:38:19.066817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:56.474 [2024-07-15 18:38:19.070242] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.474 [2024-07-15 18:38:19.070277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.474 [2024-07-15 18:38:19.070287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:56.474 [2024-07-15 18:38:19.073951] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.474 [2024-07-15 18:38:19.073988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.474 [2024-07-15 18:38:19.073999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:56.474 [2024-07-15 18:38:19.077840] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.474 [2024-07-15 18:38:19.077878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.474 [2024-07-15 18:38:19.077889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:56.474 [2024-07-15 18:38:19.080520] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.474 [2024-07-15 18:38:19.080554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.474 [2024-07-15 18:38:19.080575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:56.734 [2024-07-15 18:38:19.083639] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.734 [2024-07-15 18:38:19.083672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.734 [2024-07-15 18:38:19.083684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:56.734 [2024-07-15 18:38:19.087329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.734 [2024-07-15 18:38:19.087365] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.734 [2024-07-15 18:38:19.087376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:56.734 [2024-07-15 18:38:19.091130] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.734 [2024-07-15 18:38:19.091166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.734 [2024-07-15 18:38:19.091178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:56.734 [2024-07-15 18:38:19.093857] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.734 [2024-07-15 18:38:19.093891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.734 [2024-07-15 18:38:19.093902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:56.734 [2024-07-15 18:38:19.097063] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.734 [2024-07-15 18:38:19.097100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.734 [2024-07-15 18:38:19.097111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:56.734 [2024-07-15 18:38:19.100521] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.734 [2024-07-15 18:38:19.100558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.734 [2024-07-15 18:38:19.100581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:56.734 [2024-07-15 18:38:19.103432] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.734 [2024-07-15 18:38:19.103470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.734 [2024-07-15 18:38:19.103481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:56.734 [2024-07-15 18:38:19.106494] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.734 [2024-07-15 18:38:19.106528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.734 [2024-07-15 18:38:19.106539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:56.734 [2024-07-15 18:38:19.109882] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.734 
[2024-07-15 18:38:19.109917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.734 [2024-07-15 18:38:19.109928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:56.734 [2024-07-15 18:38:19.112715] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.734 [2024-07-15 18:38:19.112750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.734 [2024-07-15 18:38:19.112761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:56.734 [2024-07-15 18:38:19.115941] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.734 [2024-07-15 18:38:19.115977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.734 [2024-07-15 18:38:19.115988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:56.734 [2024-07-15 18:38:19.118706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.734 [2024-07-15 18:38:19.118739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.734 [2024-07-15 18:38:19.118750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:56.734 [2024-07-15 18:38:19.121721] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.734 [2024-07-15 18:38:19.121757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.734 [2024-07-15 18:38:19.121768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:56.734 [2024-07-15 18:38:19.124610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.734 [2024-07-15 18:38:19.124643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.734 [2024-07-15 18:38:19.124654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:56.734 [2024-07-15 18:38:19.128113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.734 [2024-07-15 18:38:19.128152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.734 [2024-07-15 18:38:19.128163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:56.734 [2024-07-15 18:38:19.131803] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x13d2380) 00:18:56.734 [2024-07-15 18:38:19.131839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.734 [2024-07-15 18:38:19.131850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:56.734 [2024-07-15 18:38:19.134108] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.735 [2024-07-15 18:38:19.134142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.735 [2024-07-15 18:38:19.134153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:56.735 [2024-07-15 18:38:19.137910] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.735 [2024-07-15 18:38:19.137947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.735 [2024-07-15 18:38:19.137958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:56.735 [2024-07-15 18:38:19.141817] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.735 [2024-07-15 18:38:19.141854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.735 [2024-07-15 18:38:19.141865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:56.735 [2024-07-15 18:38:19.144212] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.735 [2024-07-15 18:38:19.144248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.735 [2024-07-15 18:38:19.144259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:56.735 [2024-07-15 18:38:19.147544] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.735 [2024-07-15 18:38:19.147589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.735 [2024-07-15 18:38:19.147600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:56.735 [2024-07-15 18:38:19.151534] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.735 [2024-07-15 18:38:19.151582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.735 [2024-07-15 18:38:19.151594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:56.735 [2024-07-15 18:38:19.155279] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.735 [2024-07-15 18:38:19.155313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.735 [2024-07-15 18:38:19.155324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:56.735 [2024-07-15 18:38:19.157659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.735 [2024-07-15 18:38:19.157691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.735 [2024-07-15 18:38:19.157701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:56.735 [2024-07-15 18:38:19.160957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.735 [2024-07-15 18:38:19.160993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.735 [2024-07-15 18:38:19.161004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:56.735 [2024-07-15 18:38:19.164426] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.735 [2024-07-15 18:38:19.164462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.735 [2024-07-15 18:38:19.164473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:56.735 [2024-07-15 18:38:19.166740] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.735 [2024-07-15 18:38:19.166773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.735 [2024-07-15 18:38:19.166784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:56.735 [2024-07-15 18:38:19.170068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.735 [2024-07-15 18:38:19.170103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.735 [2024-07-15 18:38:19.170114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:56.735 [2024-07-15 18:38:19.173431] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.735 [2024-07-15 18:38:19.173469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.735 [2024-07-15 18:38:19.173479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:18:56.735 [2024-07-15 18:38:19.176411] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.735 [2024-07-15 18:38:19.176446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.735 [2024-07-15 18:38:19.176457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:56.735 [2024-07-15 18:38:19.179596] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.735 [2024-07-15 18:38:19.179628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.735 [2024-07-15 18:38:19.179639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:56.735 [2024-07-15 18:38:19.182264] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.735 [2024-07-15 18:38:19.182300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.735 [2024-07-15 18:38:19.182311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:56.735 [2024-07-15 18:38:19.185723] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.735 [2024-07-15 18:38:19.185757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.735 [2024-07-15 18:38:19.185768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:56.735 [2024-07-15 18:38:19.188431] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.735 [2024-07-15 18:38:19.188466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.735 [2024-07-15 18:38:19.188477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:56.735 [2024-07-15 18:38:19.191506] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.736 [2024-07-15 18:38:19.191544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.736 [2024-07-15 18:38:19.191555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:56.736 [2024-07-15 18:38:19.194761] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.736 [2024-07-15 18:38:19.194796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.736 [2024-07-15 18:38:19.194807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:56.736 [2024-07-15 18:38:19.197067] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.736 [2024-07-15 18:38:19.197101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.736 [2024-07-15 18:38:19.197111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:56.736 [2024-07-15 18:38:19.200387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.736 [2024-07-15 18:38:19.200424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.736 [2024-07-15 18:38:19.200435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:56.736 [2024-07-15 18:38:19.204085] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.736 [2024-07-15 18:38:19.204123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.736 [2024-07-15 18:38:19.204134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:56.736 [2024-07-15 18:38:19.207859] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.736 [2024-07-15 18:38:19.207896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.736 [2024-07-15 18:38:19.207906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:56.736 [2024-07-15 18:38:19.210127] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.736 [2024-07-15 18:38:19.210160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.736 [2024-07-15 18:38:19.210170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:56.736 [2024-07-15 18:38:19.213793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.736 [2024-07-15 18:38:19.213830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.736 [2024-07-15 18:38:19.213841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:56.736 [2024-07-15 18:38:19.217339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.736 [2024-07-15 18:38:19.217375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.736 [2024-07-15 18:38:19.217386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:56.736 [2024-07-15 18:38:19.220723] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.736 [2024-07-15 18:38:19.220758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.736 [2024-07-15 18:38:19.220769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:56.736 [2024-07-15 18:38:19.223128] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.736 [2024-07-15 18:38:19.223162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.736 [2024-07-15 18:38:19.223172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:56.736 [2024-07-15 18:38:19.226399] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.736 [2024-07-15 18:38:19.226433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.736 [2024-07-15 18:38:19.226444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:56.736 [2024-07-15 18:38:19.229744] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.736 [2024-07-15 18:38:19.229780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.736 [2024-07-15 18:38:19.229791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:56.736 [2024-07-15 18:38:19.232719] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.736 [2024-07-15 18:38:19.232755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.736 [2024-07-15 18:38:19.232766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:56.736 [2024-07-15 18:38:19.235931] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.736 [2024-07-15 18:38:19.235968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.736 [2024-07-15 18:38:19.235979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:56.736 [2024-07-15 18:38:19.238886] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.736 [2024-07-15 18:38:19.238920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.736 [2024-07-15 18:38:19.238931] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:56.736 [2024-07-15 18:38:19.242215] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.736 [2024-07-15 18:38:19.242247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.736 [2024-07-15 18:38:19.242258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:56.736 [2024-07-15 18:38:19.244492] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.736 [2024-07-15 18:38:19.244526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.736 [2024-07-15 18:38:19.244537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:56.736 [2024-07-15 18:38:19.247586] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.736 [2024-07-15 18:38:19.247618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.736 [2024-07-15 18:38:19.247629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:56.736 [2024-07-15 18:38:19.250037] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.736 [2024-07-15 18:38:19.250069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.736 [2024-07-15 18:38:19.250080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:56.736 [2024-07-15 18:38:19.253475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.736 [2024-07-15 18:38:19.253511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.736 [2024-07-15 18:38:19.253522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:56.736 [2024-07-15 18:38:19.256637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.736 [2024-07-15 18:38:19.256673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.737 [2024-07-15 18:38:19.256684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:56.737 [2024-07-15 18:38:19.259564] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.737 [2024-07-15 18:38:19.259606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.737 
[2024-07-15 18:38:19.259617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:56.737 [2024-07-15 18:38:19.262040] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.737 [2024-07-15 18:38:19.262072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.737 [2024-07-15 18:38:19.262083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:56.737 [2024-07-15 18:38:19.265279] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.737 [2024-07-15 18:38:19.265313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.737 [2024-07-15 18:38:19.265324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:56.737 [2024-07-15 18:38:19.268946] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.737 [2024-07-15 18:38:19.268983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.737 [2024-07-15 18:38:19.268993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:56.737 [2024-07-15 18:38:19.272531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.737 [2024-07-15 18:38:19.272577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.737 [2024-07-15 18:38:19.272589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:56.737 [2024-07-15 18:38:19.274669] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.737 [2024-07-15 18:38:19.274698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.737 [2024-07-15 18:38:19.274708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:56.737 [2024-07-15 18:38:19.278197] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.737 [2024-07-15 18:38:19.278233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.737 [2024-07-15 18:38:19.278244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:56.737 [2024-07-15 18:38:19.280694] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.737 [2024-07-15 18:38:19.280737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1088 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.737 [2024-07-15 18:38:19.280748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:56.737 [2024-07-15 18:38:19.284010] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.737 [2024-07-15 18:38:19.284046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.737 [2024-07-15 18:38:19.284057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:56.737 [2024-07-15 18:38:19.287222] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.737 [2024-07-15 18:38:19.287254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.737 [2024-07-15 18:38:19.287264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:56.737 [2024-07-15 18:38:19.290203] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.737 [2024-07-15 18:38:19.290234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.737 [2024-07-15 18:38:19.290245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:56.737 [2024-07-15 18:38:19.293088] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.737 [2024-07-15 18:38:19.293122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.737 [2024-07-15 18:38:19.293133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:56.737 [2024-07-15 18:38:19.295788] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.737 [2024-07-15 18:38:19.295822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.737 [2024-07-15 18:38:19.295833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:56.737 [2024-07-15 18:38:19.299472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.737 [2024-07-15 18:38:19.299507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.737 [2024-07-15 18:38:19.299518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:56.737 [2024-07-15 18:38:19.303157] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.737 [2024-07-15 18:38:19.303190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:0 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.737 [2024-07-15 18:38:19.303201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:56.737 [2024-07-15 18:38:19.306460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.737 [2024-07-15 18:38:19.306493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.737 [2024-07-15 18:38:19.306503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:56.737 [2024-07-15 18:38:19.309148] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.737 [2024-07-15 18:38:19.309181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.737 [2024-07-15 18:38:19.309191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:56.737 [2024-07-15 18:38:19.312617] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.737 [2024-07-15 18:38:19.312651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.737 [2024-07-15 18:38:19.312662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:56.737 [2024-07-15 18:38:19.315828] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.737 [2024-07-15 18:38:19.315864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.737 [2024-07-15 18:38:19.315875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:56.737 [2024-07-15 18:38:19.318447] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.737 [2024-07-15 18:38:19.318479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.737 [2024-07-15 18:38:19.318490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:56.737 [2024-07-15 18:38:19.321550] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.737 [2024-07-15 18:38:19.321594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.737 [2024-07-15 18:38:19.321605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:56.737 [2024-07-15 18:38:19.324630] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.737 [2024-07-15 18:38:19.324662] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.737 [2024-07-15 18:38:19.324673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:56.737 [2024-07-15 18:38:19.327112] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.737 [2024-07-15 18:38:19.327145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.737 [2024-07-15 18:38:19.327156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:56.737 [2024-07-15 18:38:19.330139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.737 [2024-07-15 18:38:19.330174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.737 [2024-07-15 18:38:19.330185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:56.737 [2024-07-15 18:38:19.333073] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.737 [2024-07-15 18:38:19.333110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.737 [2024-07-15 18:38:19.333121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:56.737 [2024-07-15 18:38:19.336209] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.737 [2024-07-15 18:38:19.336246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.737 [2024-07-15 18:38:19.336257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:56.737 [2024-07-15 18:38:19.339053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.737 [2024-07-15 18:38:19.339085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.737 [2024-07-15 18:38:19.339096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:56.737 [2024-07-15 18:38:19.342010] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.737 [2024-07-15 18:38:19.342044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.738 [2024-07-15 18:38:19.342055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:56.738 [2024-07-15 18:38:19.345197] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.738 
[2024-07-15 18:38:19.345234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.738 [2024-07-15 18:38:19.345245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:56.996 [2024-07-15 18:38:19.347783] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.996 [2024-07-15 18:38:19.347818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.996 [2024-07-15 18:38:19.347829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:56.996 [2024-07-15 18:38:19.351524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.996 [2024-07-15 18:38:19.351561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.996 [2024-07-15 18:38:19.351583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:56.996 [2024-07-15 18:38:19.355257] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.996 [2024-07-15 18:38:19.355291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.996 [2024-07-15 18:38:19.355301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:56.996 [2024-07-15 18:38:19.358582] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.996 [2024-07-15 18:38:19.358612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.996 [2024-07-15 18:38:19.358623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:56.996 [2024-07-15 18:38:19.360708] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.996 [2024-07-15 18:38:19.360740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.996 [2024-07-15 18:38:19.360750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:56.996 [2024-07-15 18:38:19.364492] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.996 [2024-07-15 18:38:19.364528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.996 [2024-07-15 18:38:19.364539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:56.996 [2024-07-15 18:38:19.367881] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x13d2380) 00:18:56.996 [2024-07-15 18:38:19.367917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.996 [2024-07-15 18:38:19.367928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:56.996 [2024-07-15 18:38:19.371776] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.996 [2024-07-15 18:38:19.371813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.996 [2024-07-15 18:38:19.371824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:56.996 [2024-07-15 18:38:19.374522] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.996 [2024-07-15 18:38:19.374553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.996 [2024-07-15 18:38:19.374574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:56.996 [2024-07-15 18:38:19.377859] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.996 [2024-07-15 18:38:19.377895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.996 [2024-07-15 18:38:19.377906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:56.996 [2024-07-15 18:38:19.381547] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.996 [2024-07-15 18:38:19.381594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.997 [2024-07-15 18:38:19.381605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:56.997 [2024-07-15 18:38:19.385022] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.997 [2024-07-15 18:38:19.385059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.997 [2024-07-15 18:38:19.385070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:56.997 [2024-07-15 18:38:19.387726] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.997 [2024-07-15 18:38:19.387758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.997 [2024-07-15 18:38:19.387768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:56.997 [2024-07-15 18:38:19.390749] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.997 [2024-07-15 18:38:19.390782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.997 [2024-07-15 18:38:19.390793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:56.997 [2024-07-15 18:38:19.394354] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.997 [2024-07-15 18:38:19.394390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.997 [2024-07-15 18:38:19.394400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:56.997 [2024-07-15 18:38:19.396728] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.997 [2024-07-15 18:38:19.396764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.997 [2024-07-15 18:38:19.396774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:56.997 [2024-07-15 18:38:19.399869] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.997 [2024-07-15 18:38:19.399906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.997 [2024-07-15 18:38:19.399916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:56.997 [2024-07-15 18:38:19.402404] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.997 [2024-07-15 18:38:19.402436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.997 [2024-07-15 18:38:19.402447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:56.997 [2024-07-15 18:38:19.405953] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.997 [2024-07-15 18:38:19.405987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.997 [2024-07-15 18:38:19.405998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:56.997 [2024-07-15 18:38:19.408746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.997 [2024-07-15 18:38:19.408782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.997 [2024-07-15 18:38:19.408793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:18:56.997 [2024-07-15 18:38:19.412066] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.997 [2024-07-15 18:38:19.412101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.997 [2024-07-15 18:38:19.412113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:56.997 [2024-07-15 18:38:19.414928] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.997 [2024-07-15 18:38:19.414962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.997 [2024-07-15 18:38:19.414973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:56.997 [2024-07-15 18:38:19.418325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.997 [2024-07-15 18:38:19.418359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.997 [2024-07-15 18:38:19.418370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:56.997 [2024-07-15 18:38:19.421316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.997 [2024-07-15 18:38:19.421352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.997 [2024-07-15 18:38:19.421363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:56.997 [2024-07-15 18:38:19.424490] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.997 [2024-07-15 18:38:19.424526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.997 [2024-07-15 18:38:19.424537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:56.997 [2024-07-15 18:38:19.427695] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.997 [2024-07-15 18:38:19.427730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.997 [2024-07-15 18:38:19.427740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:56.997 [2024-07-15 18:38:19.430582] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.997 [2024-07-15 18:38:19.430612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.997 [2024-07-15 18:38:19.430622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:56.997 [2024-07-15 18:38:19.433949] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.997 [2024-07-15 18:38:19.433985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.997 [2024-07-15 18:38:19.433996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:56.997 [2024-07-15 18:38:19.436953] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.997 [2024-07-15 18:38:19.436990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.997 [2024-07-15 18:38:19.437000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:56.997 [2024-07-15 18:38:19.439795] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.997 [2024-07-15 18:38:19.439831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.997 [2024-07-15 18:38:19.439842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:56.997 [2024-07-15 18:38:19.442998] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.997 [2024-07-15 18:38:19.443032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.997 [2024-07-15 18:38:19.443042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:56.997 [2024-07-15 18:38:19.445471] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.997 [2024-07-15 18:38:19.445504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.997 [2024-07-15 18:38:19.445515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:56.997 [2024-07-15 18:38:19.448705] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.997 [2024-07-15 18:38:19.448739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.997 [2024-07-15 18:38:19.448750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:56.997 [2024-07-15 18:38:19.451864] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.997 [2024-07-15 18:38:19.451900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.997 [2024-07-15 18:38:19.451911] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:56.997 [2024-07-15 18:38:19.454559] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.997 [2024-07-15 18:38:19.454600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.997 [2024-07-15 18:38:19.454611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:56.997 [2024-07-15 18:38:19.457941] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.997 [2024-07-15 18:38:19.457976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.997 [2024-07-15 18:38:19.457986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:56.997 [2024-07-15 18:38:19.461621] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.997 [2024-07-15 18:38:19.461656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.997 [2024-07-15 18:38:19.461667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:56.997 [2024-07-15 18:38:19.465369] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.997 [2024-07-15 18:38:19.465408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.997 [2024-07-15 18:38:19.465419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:56.997 [2024-07-15 18:38:19.468120] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.997 [2024-07-15 18:38:19.468154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.997 [2024-07-15 18:38:19.468165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:56.997 [2024-07-15 18:38:19.471103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.997 [2024-07-15 18:38:19.471135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.997 [2024-07-15 18:38:19.471146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:56.997 [2024-07-15 18:38:19.475082] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.997 [2024-07-15 18:38:19.475116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.997 [2024-07-15 18:38:19.475127] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:56.997 [2024-07-15 18:38:19.478186] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.997 [2024-07-15 18:38:19.478218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.997 [2024-07-15 18:38:19.478229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:56.997 [2024-07-15 18:38:19.480256] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.997 [2024-07-15 18:38:19.480289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.997 [2024-07-15 18:38:19.480300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:56.997 [2024-07-15 18:38:19.483971] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.997 [2024-07-15 18:38:19.484007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.997 [2024-07-15 18:38:19.484018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:56.997 [2024-07-15 18:38:19.486856] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.998 [2024-07-15 18:38:19.486891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.998 [2024-07-15 18:38:19.486902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:56.998 [2024-07-15 18:38:19.490212] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.998 [2024-07-15 18:38:19.490247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.998 [2024-07-15 18:38:19.490258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:56.998 [2024-07-15 18:38:19.493404] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.998 [2024-07-15 18:38:19.493442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.998 [2024-07-15 18:38:19.493453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:56.998 [2024-07-15 18:38:19.496118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.998 [2024-07-15 18:38:19.496154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:56.998 [2024-07-15 18:38:19.496164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:56.998 [2024-07-15 18:38:19.499793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.998 [2024-07-15 18:38:19.499831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.998 [2024-07-15 18:38:19.499841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:56.998 [2024-07-15 18:38:19.503308] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.998 [2024-07-15 18:38:19.503343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.998 [2024-07-15 18:38:19.503353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:56.998 [2024-07-15 18:38:19.505945] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.998 [2024-07-15 18:38:19.505978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.998 [2024-07-15 18:38:19.505989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:56.998 [2024-07-15 18:38:19.509172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.998 [2024-07-15 18:38:19.509209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.998 [2024-07-15 18:38:19.509220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:56.998 [2024-07-15 18:38:19.512985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.998 [2024-07-15 18:38:19.513023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.998 [2024-07-15 18:38:19.513033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:56.998 [2024-07-15 18:38:19.515747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.998 [2024-07-15 18:38:19.515780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.998 [2024-07-15 18:38:19.515791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:56.998 [2024-07-15 18:38:19.518982] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.998 [2024-07-15 18:38:19.519016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25248 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.998 [2024-07-15 18:38:19.519026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:56.998 [2024-07-15 18:38:19.522784] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.998 [2024-07-15 18:38:19.522818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.998 [2024-07-15 18:38:19.522830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:56.998 [2024-07-15 18:38:19.525168] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.998 [2024-07-15 18:38:19.525203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.998 [2024-07-15 18:38:19.525214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:56.998 [2024-07-15 18:38:19.528065] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.998 [2024-07-15 18:38:19.528101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.998 [2024-07-15 18:38:19.528112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:56.998 [2024-07-15 18:38:19.531337] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.998 [2024-07-15 18:38:19.531393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.998 [2024-07-15 18:38:19.531404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:56.998 [2024-07-15 18:38:19.534315] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.998 [2024-07-15 18:38:19.534348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.998 [2024-07-15 18:38:19.534358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:56.998 [2024-07-15 18:38:19.537229] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.998 [2024-07-15 18:38:19.537265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.998 [2024-07-15 18:38:19.537276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:56.998 [2024-07-15 18:38:19.539890] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.998 [2024-07-15 18:38:19.539924] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.998 [2024-07-15 18:38:19.539935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:56.998 [2024-07-15 18:38:19.543626] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.998 [2024-07-15 18:38:19.543660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.998 [2024-07-15 18:38:19.543670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:56.998 [2024-07-15 18:38:19.547157] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.998 [2024-07-15 18:38:19.547190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.998 [2024-07-15 18:38:19.547202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:56.998 [2024-07-15 18:38:19.550883] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.998 [2024-07-15 18:38:19.550918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.998 [2024-07-15 18:38:19.550929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:56.998 [2024-07-15 18:38:19.553632] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.998 [2024-07-15 18:38:19.553661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.998 [2024-07-15 18:38:19.553672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:56.998 [2024-07-15 18:38:19.556685] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.998 [2024-07-15 18:38:19.556719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.998 [2024-07-15 18:38:19.556730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:56.998 [2024-07-15 18:38:19.560591] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.998 [2024-07-15 18:38:19.560621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.998 [2024-07-15 18:38:19.560632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:56.998 [2024-07-15 18:38:19.564538] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.998 [2024-07-15 18:38:19.564585] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.998 [2024-07-15 18:38:19.564596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:56.998 [2024-07-15 18:38:19.568227] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.998 [2024-07-15 18:38:19.568263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.998 [2024-07-15 18:38:19.568274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:56.998 [2024-07-15 18:38:19.570327] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.998 [2024-07-15 18:38:19.570360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.998 [2024-07-15 18:38:19.570370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:56.998 [2024-07-15 18:38:19.574096] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.998 [2024-07-15 18:38:19.574130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.998 [2024-07-15 18:38:19.574141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:56.998 [2024-07-15 18:38:19.577420] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.998 [2024-07-15 18:38:19.577454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.998 [2024-07-15 18:38:19.577465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:56.998 [2024-07-15 18:38:19.580137] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.998 [2024-07-15 18:38:19.580172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.998 [2024-07-15 18:38:19.580183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:56.998 [2024-07-15 18:38:19.583588] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.998 [2024-07-15 18:38:19.583622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.998 [2024-07-15 18:38:19.583632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:56.998 [2024-07-15 18:38:19.587552] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x13d2380) 00:18:56.998 [2024-07-15 18:38:19.587598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.998 [2024-07-15 18:38:19.587610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:56.998 [2024-07-15 18:38:19.591531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.998 [2024-07-15 18:38:19.591577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.998 [2024-07-15 18:38:19.591588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:56.998 [2024-07-15 18:38:19.594257] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.998 [2024-07-15 18:38:19.594291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.999 [2024-07-15 18:38:19.594302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:56.999 [2024-07-15 18:38:19.597513] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.999 [2024-07-15 18:38:19.597548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.999 [2024-07-15 18:38:19.597559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:56.999 [2024-07-15 18:38:19.601269] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.999 [2024-07-15 18:38:19.601307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.999 [2024-07-15 18:38:19.601317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:56.999 [2024-07-15 18:38:19.605009] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.999 [2024-07-15 18:38:19.605044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.999 [2024-07-15 18:38:19.605055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:56.999 [2024-07-15 18:38:19.608507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:56.999 [2024-07-15 18:38:19.608544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.999 [2024-07-15 18:38:19.608556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.258 [2024-07-15 18:38:19.610873] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.258 [2024-07-15 18:38:19.610903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.258 [2024-07-15 18:38:19.610914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.258 [2024-07-15 18:38:19.614084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.258 [2024-07-15 18:38:19.614119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.258 [2024-07-15 18:38:19.614130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.258 [2024-07-15 18:38:19.617943] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.258 [2024-07-15 18:38:19.617980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.258 [2024-07-15 18:38:19.617991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.258 [2024-07-15 18:38:19.621779] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.258 [2024-07-15 18:38:19.621816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.258 [2024-07-15 18:38:19.621827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.258 [2024-07-15 18:38:19.625283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.258 [2024-07-15 18:38:19.625318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.258 [2024-07-15 18:38:19.625330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.258 [2024-07-15 18:38:19.627519] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.258 [2024-07-15 18:38:19.627552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.258 [2024-07-15 18:38:19.627575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.258 [2024-07-15 18:38:19.630925] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.258 [2024-07-15 18:38:19.630959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.258 [2024-07-15 18:38:19.630971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:18:57.258 [2024-07-15 18:38:19.634782] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.258 [2024-07-15 18:38:19.634818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.258 [2024-07-15 18:38:19.634828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.258 [2024-07-15 18:38:19.638593] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.258 [2024-07-15 18:38:19.638628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.258 [2024-07-15 18:38:19.638639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.258 [2024-07-15 18:38:19.641306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.258 [2024-07-15 18:38:19.641340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.258 [2024-07-15 18:38:19.641351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.258 [2024-07-15 18:38:19.644325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.258 [2024-07-15 18:38:19.644361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.258 [2024-07-15 18:38:19.644372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.258 [2024-07-15 18:38:19.647523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.259 [2024-07-15 18:38:19.647561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.259 [2024-07-15 18:38:19.647585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.259 [2024-07-15 18:38:19.650180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.259 [2024-07-15 18:38:19.650217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.259 [2024-07-15 18:38:19.650228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.259 [2024-07-15 18:38:19.653484] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.259 [2024-07-15 18:38:19.653520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.259 [2024-07-15 18:38:19.653531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.259 [2024-07-15 18:38:19.656400] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.259 [2024-07-15 18:38:19.656436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.259 [2024-07-15 18:38:19.656447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.259 [2024-07-15 18:38:19.659288] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.259 [2024-07-15 18:38:19.659321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.259 [2024-07-15 18:38:19.659332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.259 [2024-07-15 18:38:19.662852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.259 [2024-07-15 18:38:19.662886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.259 [2024-07-15 18:38:19.662897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.259 [2024-07-15 18:38:19.666118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.259 [2024-07-15 18:38:19.666151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.259 [2024-07-15 18:38:19.666162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.259 [2024-07-15 18:38:19.668585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.259 [2024-07-15 18:38:19.668618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.259 [2024-07-15 18:38:19.668629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.259 [2024-07-15 18:38:19.671682] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.259 [2024-07-15 18:38:19.671718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.259 [2024-07-15 18:38:19.671729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.259 [2024-07-15 18:38:19.674828] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.259 [2024-07-15 18:38:19.674863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.259 [2024-07-15 18:38:19.674873] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.259 [2024-07-15 18:38:19.678083] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.259 [2024-07-15 18:38:19.678118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.259 [2024-07-15 18:38:19.678129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.259 [2024-07-15 18:38:19.680620] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.259 [2024-07-15 18:38:19.680652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.259 [2024-07-15 18:38:19.680663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.259 [2024-07-15 18:38:19.683813] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.259 [2024-07-15 18:38:19.683850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.259 [2024-07-15 18:38:19.683861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.259 [2024-07-15 18:38:19.686466] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.259 [2024-07-15 18:38:19.686499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.259 [2024-07-15 18:38:19.686509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.259 [2024-07-15 18:38:19.690465] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.259 [2024-07-15 18:38:19.690503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.259 [2024-07-15 18:38:19.690514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.259 [2024-07-15 18:38:19.694490] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.259 [2024-07-15 18:38:19.694524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.259 [2024-07-15 18:38:19.694536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.259 [2024-07-15 18:38:19.698423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.259 [2024-07-15 18:38:19.698462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:57.259 [2024-07-15 18:38:19.698473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.259 [2024-07-15 18:38:19.700939] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.259 [2024-07-15 18:38:19.700974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.259 [2024-07-15 18:38:19.700985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.259 [2024-07-15 18:38:19.703958] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.259 [2024-07-15 18:38:19.703997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.259 [2024-07-15 18:38:19.704008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.259 [2024-07-15 18:38:19.708062] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.259 [2024-07-15 18:38:19.708101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.259 [2024-07-15 18:38:19.708113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.259 [2024-07-15 18:38:19.711786] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.259 [2024-07-15 18:38:19.711823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.259 [2024-07-15 18:38:19.711834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.259 [2024-07-15 18:38:19.715400] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.259 [2024-07-15 18:38:19.715438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.259 [2024-07-15 18:38:19.715450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.259 [2024-07-15 18:38:19.718882] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.259 [2024-07-15 18:38:19.718918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.259 [2024-07-15 18:38:19.718929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.259 [2024-07-15 18:38:19.721179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.259 [2024-07-15 18:38:19.721215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18304 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.259 [2024-07-15 18:38:19.721227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.259 [2024-07-15 18:38:19.724757] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.259 [2024-07-15 18:38:19.724794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.259 [2024-07-15 18:38:19.724805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.259 [2024-07-15 18:38:19.728386] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.259 [2024-07-15 18:38:19.728422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.259 [2024-07-15 18:38:19.728433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.260 [2024-07-15 18:38:19.731067] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.260 [2024-07-15 18:38:19.731100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.260 [2024-07-15 18:38:19.731111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.260 [2024-07-15 18:38:19.734480] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.260 [2024-07-15 18:38:19.734513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.260 [2024-07-15 18:38:19.734524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.260 [2024-07-15 18:38:19.737864] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.260 [2024-07-15 18:38:19.737899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.260 [2024-07-15 18:38:19.737910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.260 [2024-07-15 18:38:19.740424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.260 [2024-07-15 18:38:19.740461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.260 [2024-07-15 18:38:19.740472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.260 [2024-07-15 18:38:19.743556] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.260 [2024-07-15 18:38:19.743604] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.260 [2024-07-15 18:38:19.743615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.260 [2024-07-15 18:38:19.747405] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.260 [2024-07-15 18:38:19.747442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.260 [2024-07-15 18:38:19.747453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.260 [2024-07-15 18:38:19.749955] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.260 [2024-07-15 18:38:19.749986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.260 [2024-07-15 18:38:19.749997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.260 [2024-07-15 18:38:19.753294] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.260 [2024-07-15 18:38:19.753331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.260 [2024-07-15 18:38:19.753341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.260 [2024-07-15 18:38:19.756172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.260 [2024-07-15 18:38:19.756208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.260 [2024-07-15 18:38:19.756219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.260 [2024-07-15 18:38:19.759239] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.260 [2024-07-15 18:38:19.759273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.260 [2024-07-15 18:38:19.759284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.260 [2024-07-15 18:38:19.762517] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.260 [2024-07-15 18:38:19.762551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.260 [2024-07-15 18:38:19.762562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.260 [2024-07-15 18:38:19.765428] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.260 [2024-07-15 18:38:19.765461] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.260 [2024-07-15 18:38:19.765472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.260 [2024-07-15 18:38:19.768787] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.260 [2024-07-15 18:38:19.768821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.260 [2024-07-15 18:38:19.768832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.260 [2024-07-15 18:38:19.771558] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.260 [2024-07-15 18:38:19.771603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.260 [2024-07-15 18:38:19.771614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.260 [2024-07-15 18:38:19.774857] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.260 [2024-07-15 18:38:19.774891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.260 [2024-07-15 18:38:19.774902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.260 [2024-07-15 18:38:19.777412] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.260 [2024-07-15 18:38:19.777445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.260 [2024-07-15 18:38:19.777457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.260 [2024-07-15 18:38:19.782491] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.260 [2024-07-15 18:38:19.782527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.260 [2024-07-15 18:38:19.782538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.260 [2024-07-15 18:38:19.786156] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.260 [2024-07-15 18:38:19.786192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.260 [2024-07-15 18:38:19.786203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.260 [2024-07-15 18:38:19.790110] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 
00:18:57.260 [2024-07-15 18:38:19.790146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.260 [2024-07-15 18:38:19.790157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.260 [2024-07-15 18:38:19.793860] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.260 [2024-07-15 18:38:19.793897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.260 [2024-07-15 18:38:19.793909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.260 [2024-07-15 18:38:19.796309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.260 [2024-07-15 18:38:19.796343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.260 [2024-07-15 18:38:19.796354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.260 [2024-07-15 18:38:19.799735] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.260 [2024-07-15 18:38:19.799769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.260 [2024-07-15 18:38:19.799780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.260 [2024-07-15 18:38:19.802951] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.260 [2024-07-15 18:38:19.802985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.260 [2024-07-15 18:38:19.802996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.260 [2024-07-15 18:38:19.805401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.260 [2024-07-15 18:38:19.805436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.260 [2024-07-15 18:38:19.805446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.260 [2024-07-15 18:38:19.809192] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.260 [2024-07-15 18:38:19.809227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.260 [2024-07-15 18:38:19.809238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.260 [2024-07-15 18:38:19.812979] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.260 [2024-07-15 18:38:19.813015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.260 [2024-07-15 18:38:19.813026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.260 [2024-07-15 18:38:19.816398] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.260 [2024-07-15 18:38:19.816433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.260 [2024-07-15 18:38:19.816444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.260 [2024-07-15 18:38:19.820029] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.260 [2024-07-15 18:38:19.820064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.261 [2024-07-15 18:38:19.820075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.261 [2024-07-15 18:38:19.822690] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.261 [2024-07-15 18:38:19.822716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.261 [2024-07-15 18:38:19.822727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.261 [2024-07-15 18:38:19.825687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.261 [2024-07-15 18:38:19.825719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.261 [2024-07-15 18:38:19.825730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.261 [2024-07-15 18:38:19.829414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.261 [2024-07-15 18:38:19.829450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.261 [2024-07-15 18:38:19.829461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.261 [2024-07-15 18:38:19.832258] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.261 [2024-07-15 18:38:19.832293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.261 [2024-07-15 18:38:19.832304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.261 [2024-07-15 18:38:19.835395] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.261 [2024-07-15 18:38:19.835430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.261 [2024-07-15 18:38:19.835441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.261 [2024-07-15 18:38:19.838791] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.261 [2024-07-15 18:38:19.838826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.261 [2024-07-15 18:38:19.838837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.261 [2024-07-15 18:38:19.842190] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.261 [2024-07-15 18:38:19.842221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.261 [2024-07-15 18:38:19.842232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.261 [2024-07-15 18:38:19.844878] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.261 [2024-07-15 18:38:19.844912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.261 [2024-07-15 18:38:19.844923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.261 [2024-07-15 18:38:19.848166] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.261 [2024-07-15 18:38:19.848200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.261 [2024-07-15 18:38:19.848211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.261 [2024-07-15 18:38:19.850810] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.261 [2024-07-15 18:38:19.850843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.261 [2024-07-15 18:38:19.850854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.261 [2024-07-15 18:38:19.854144] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.261 [2024-07-15 18:38:19.854180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.261 [2024-07-15 18:38:19.854191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
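(The "(00/22)" pair printed with every completion above is the status code type and status code: SCT 0x0, generic command status, and SC 0x22, which spdk_nvme_print_completion renders as COMMAND TRANSIENT TRANSPORT ERROR. Assuming the standard NVMe completion-status bit layout, and using only hypothetical helper names — decode_status() is not an SPDK API — the sketch below unpacks the p/sc/sct/m/dnr fields shown in these lines from the 16-bit status halfword of CQE DW3.)

#include <stdint.h>
#include <stdio.h>

/* Assumed layout of the completion status halfword (CQE DW3[31:16]):
 *   bit 0      P   (phase tag)
 *   bits 1-8   SC  (status code)
 *   bits 9-11  SCT (status code type)
 *   bits 12-13 CRD (command retry delay)
 *   bit 14     M   (more)
 *   bit 15     DNR (do not retry) */
struct nvme_status_fields {
    unsigned p, sc, sct, crd, m, dnr;
};

static struct nvme_status_fields decode_status(uint16_t raw)
{
    struct nvme_status_fields s = {
        .p   = raw & 0x1,
        .sc  = (raw >> 1) & 0xFF,
        .sct = (raw >> 9) & 0x7,
        .crd = (raw >> 12) & 0x3,
        .m   = (raw >> 14) & 0x1,
        .dnr = (raw >> 15) & 0x1,
    };
    return s;
}

int main(void)
{
    /* A status halfword with SCT=0x0 and SC=0x22, as in the "(00/22)"
     * completions above with p:0 m:0 dnr:0.  SC occupies bits 1-8. */
    uint16_t raw = (uint16_t)(0x22 << 1);
    struct nvme_status_fields s = decode_status(raw);

    /* Prints: sct:0x00 sc:0x22 p:0 m:0 dnr:0 */
    printf("sct:0x%02x sc:0x%02x p:%u m:%u dnr:%u\n",
           s.sct, s.sc, s.p, s.m, s.dnr);
    return 0;
}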
00:18:57.261 [2024-07-15 18:38:19.857318] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.261 [2024-07-15 18:38:19.857353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.261 [2024-07-15 18:38:19.857364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.261 [2024-07-15 18:38:19.859936] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.261 [2024-07-15 18:38:19.859969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.261 [2024-07-15 18:38:19.859981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.261 [2024-07-15 18:38:19.863440] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.261 [2024-07-15 18:38:19.863475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.261 [2024-07-15 18:38:19.863486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.261 [2024-07-15 18:38:19.866977] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.261 [2024-07-15 18:38:19.867011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.261 [2024-07-15 18:38:19.867022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.521 [2024-07-15 18:38:19.870840] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.521 [2024-07-15 18:38:19.870875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.521 [2024-07-15 18:38:19.870885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.521 [2024-07-15 18:38:19.873549] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.521 [2024-07-15 18:38:19.873595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.521 [2024-07-15 18:38:19.873607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.521 [2024-07-15 18:38:19.876818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.521 [2024-07-15 18:38:19.876854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.521 [2024-07-15 18:38:19.876865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.521 [2024-07-15 18:38:19.880022] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.521 [2024-07-15 18:38:19.880057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.521 [2024-07-15 18:38:19.880068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.521 [2024-07-15 18:38:19.882879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.521 [2024-07-15 18:38:19.882912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.521 [2024-07-15 18:38:19.882923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.521 [2024-07-15 18:38:19.886127] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.521 [2024-07-15 18:38:19.886161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.521 [2024-07-15 18:38:19.886172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.521 [2024-07-15 18:38:19.889519] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.521 [2024-07-15 18:38:19.889555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.521 [2024-07-15 18:38:19.889576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.521 [2024-07-15 18:38:19.892321] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.521 [2024-07-15 18:38:19.892356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.521 [2024-07-15 18:38:19.892367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.521 [2024-07-15 18:38:19.895656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.521 [2024-07-15 18:38:19.895689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.521 [2024-07-15 18:38:19.895700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.521 [2024-07-15 18:38:19.899023] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.521 [2024-07-15 18:38:19.899057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.521 [2024-07-15 18:38:19.899068] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.521 [2024-07-15 18:38:19.901478] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.521 [2024-07-15 18:38:19.901513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.521 [2024-07-15 18:38:19.901525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.521 [2024-07-15 18:38:19.905378] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.521 [2024-07-15 18:38:19.905415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.521 [2024-07-15 18:38:19.905426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.521 [2024-07-15 18:38:19.908884] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.521 [2024-07-15 18:38:19.908919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.521 [2024-07-15 18:38:19.908931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.521 [2024-07-15 18:38:19.912322] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.521 [2024-07-15 18:38:19.912359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.521 [2024-07-15 18:38:19.912370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.521 [2024-07-15 18:38:19.915022] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.521 [2024-07-15 18:38:19.915052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.521 [2024-07-15 18:38:19.915063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.521 [2024-07-15 18:38:19.918249] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.521 [2024-07-15 18:38:19.918283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.521 [2024-07-15 18:38:19.918294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.521 [2024-07-15 18:38:19.921733] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.521 [2024-07-15 18:38:19.921767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.521 [2024-07-15 18:38:19.921777] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.521 [2024-07-15 18:38:19.924378] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.521 [2024-07-15 18:38:19.924411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.521 [2024-07-15 18:38:19.924422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.521 [2024-07-15 18:38:19.927192] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.521 [2024-07-15 18:38:19.927237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.521 [2024-07-15 18:38:19.927248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.521 [2024-07-15 18:38:19.930264] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.521 [2024-07-15 18:38:19.930298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.521 [2024-07-15 18:38:19.930309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.521 [2024-07-15 18:38:19.933028] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.522 [2024-07-15 18:38:19.933178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.522 [2024-07-15 18:38:19.933303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.522 [2024-07-15 18:38:19.936296] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.522 [2024-07-15 18:38:19.936438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.522 [2024-07-15 18:38:19.936589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.522 [2024-07-15 18:38:19.940017] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.522 [2024-07-15 18:38:19.940166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.522 [2024-07-15 18:38:19.940249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.522 [2024-07-15 18:38:19.943855] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.522 [2024-07-15 18:38:19.944005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:57.522 [2024-07-15 18:38:19.944085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.522 [2024-07-15 18:38:19.947562] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.522 [2024-07-15 18:38:19.947715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.522 [2024-07-15 18:38:19.947808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.522 [2024-07-15 18:38:19.950203] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.522 [2024-07-15 18:38:19.950347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.522 [2024-07-15 18:38:19.950471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.522 [2024-07-15 18:38:19.953285] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.522 [2024-07-15 18:38:19.953434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.522 [2024-07-15 18:38:19.953516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.522 [2024-07-15 18:38:19.956800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.522 [2024-07-15 18:38:19.956943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.522 [2024-07-15 18:38:19.956958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.522 [2024-07-15 18:38:19.959425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.522 [2024-07-15 18:38:19.959461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.522 [2024-07-15 18:38:19.959472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.522 [2024-07-15 18:38:19.962699] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.522 [2024-07-15 18:38:19.962732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.522 [2024-07-15 18:38:19.962743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.522 [2024-07-15 18:38:19.965423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.522 [2024-07-15 18:38:19.965458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 
nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.522 [2024-07-15 18:38:19.965469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.522 [2024-07-15 18:38:19.968457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.522 [2024-07-15 18:38:19.968492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.522 [2024-07-15 18:38:19.968503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.522 [2024-07-15 18:38:19.970851] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.522 [2024-07-15 18:38:19.970884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.522 [2024-07-15 18:38:19.970894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.522 [2024-07-15 18:38:19.974315] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.522 [2024-07-15 18:38:19.974350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.522 [2024-07-15 18:38:19.974361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.522 [2024-07-15 18:38:19.977939] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.522 [2024-07-15 18:38:19.977973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.522 [2024-07-15 18:38:19.977984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.522 [2024-07-15 18:38:19.981229] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.522 [2024-07-15 18:38:19.981262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.522 [2024-07-15 18:38:19.981273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.522 [2024-07-15 18:38:19.984904] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.522 [2024-07-15 18:38:19.984938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.522 [2024-07-15 18:38:19.984949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.522 [2024-07-15 18:38:19.987282] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.522 [2024-07-15 18:38:19.987315] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.522 [2024-07-15 18:38:19.987326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.522 [2024-07-15 18:38:19.991125] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.522 [2024-07-15 18:38:19.991160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.522 [2024-07-15 18:38:19.991170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.522 [2024-07-15 18:38:19.994899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.522 [2024-07-15 18:38:19.994934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.522 [2024-07-15 18:38:19.994945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.522 [2024-07-15 18:38:19.997401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.522 [2024-07-15 18:38:19.997434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.522 [2024-07-15 18:38:19.997445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.522 [2024-07-15 18:38:20.000587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.522 [2024-07-15 18:38:20.000619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.522 [2024-07-15 18:38:20.000630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.522 [2024-07-15 18:38:20.004378] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.522 [2024-07-15 18:38:20.004413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.522 [2024-07-15 18:38:20.004424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.522 [2024-07-15 18:38:20.008295] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.522 [2024-07-15 18:38:20.008331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.522 [2024-07-15 18:38:20.008342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.522 [2024-07-15 18:38:20.010868] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.522 
[2024-07-15 18:38:20.010899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.522 [2024-07-15 18:38:20.010910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.523 [2024-07-15 18:38:20.014107] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.523 [2024-07-15 18:38:20.014142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.523 [2024-07-15 18:38:20.014153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.523 [2024-07-15 18:38:20.017955] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.523 [2024-07-15 18:38:20.017990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.523 [2024-07-15 18:38:20.018000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.523 [2024-07-15 18:38:20.021119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.523 [2024-07-15 18:38:20.021152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.523 [2024-07-15 18:38:20.021163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.523 [2024-07-15 18:38:20.023366] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.523 [2024-07-15 18:38:20.023399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.523 [2024-07-15 18:38:20.023410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.523 [2024-07-15 18:38:20.026814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.523 [2024-07-15 18:38:20.026849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.523 [2024-07-15 18:38:20.026860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.523 [2024-07-15 18:38:20.030397] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.523 [2024-07-15 18:38:20.030433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.523 [2024-07-15 18:38:20.030443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.523 [2024-07-15 18:38:20.034031] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x13d2380) 00:18:57.523 [2024-07-15 18:38:20.034065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.523 [2024-07-15 18:38:20.034076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.523 [2024-07-15 18:38:20.036227] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.523 [2024-07-15 18:38:20.036261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.523 [2024-07-15 18:38:20.036271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.523 [2024-07-15 18:38:20.039580] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.523 [2024-07-15 18:38:20.039728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.523 [2024-07-15 18:38:20.039796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.523 [2024-07-15 18:38:20.043104] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.523 [2024-07-15 18:38:20.043256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.523 [2024-07-15 18:38:20.043338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.523 [2024-07-15 18:38:20.046116] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.523 [2024-07-15 18:38:20.046258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.523 [2024-07-15 18:38:20.046343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.523 [2024-07-15 18:38:20.049467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.523 [2024-07-15 18:38:20.049626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.523 [2024-07-15 18:38:20.049733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.523 [2024-07-15 18:38:20.052697] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d2380) 00:18:57.523 [2024-07-15 18:38:20.052845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.523 [2024-07-15 18:38:20.052952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.523 00:18:57.523 Latency(us) 00:18:57.523 Device Information : 
runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:57.523 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:18:57.523 nvme0n1 : 2.00 9738.14 1217.27 0.00 0.00 1640.24 467.17 5369.21 00:18:57.523 =================================================================================================================== 00:18:57.523 Total : 9738.14 1217.27 0.00 0.00 1640.24 467.17 5369.21 00:18:57.523 0 00:18:57.523 18:38:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:57.523 18:38:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:57.523 | .driver_specific 00:18:57.523 | .nvme_error 00:18:57.523 | .status_code 00:18:57.523 | .command_transient_transport_error' 00:18:57.523 18:38:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:57.523 18:38:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:57.782 18:38:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 628 > 0 )) 00:18:57.782 18:38:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 93146 00:18:57.782 18:38:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 93146 ']' 00:18:57.782 18:38:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 93146 00:18:57.782 18:38:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:18:57.782 18:38:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:57.782 18:38:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93146 00:18:57.782 killing process with pid 93146 00:18:57.782 Received shutdown signal, test time was about 2.000000 seconds 00:18:57.782 00:18:57.782 Latency(us) 00:18:57.782 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:57.782 =================================================================================================================== 00:18:57.782 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:57.782 18:38:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:57.782 18:38:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:57.782 18:38:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93146' 00:18:57.782 18:38:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 93146 00:18:57.782 18:38:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 93146 00:18:58.039 18:38:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:18:58.039 18:38:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:18:58.039 18:38:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:18:58.039 18:38:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:18:58.039 18:38:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:18:58.039 18:38:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=93231 00:18:58.039 18:38:20 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 93231 /var/tmp/bperf.sock 00:18:58.039 18:38:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 93231 ']' 00:18:58.039 18:38:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:18:58.039 18:38:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:58.039 18:38:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:58.039 18:38:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:58.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:58.039 18:38:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:58.039 18:38:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:58.039 [2024-07-15 18:38:20.579088] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:18:58.039 [2024-07-15 18:38:20.579327] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93231 ] 00:18:58.297 [2024-07-15 18:38:20.704645] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:58.297 [2024-07-15 18:38:20.794237] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:58.885 18:38:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:58.885 18:38:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:18:58.885 18:38:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:58.885 18:38:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:59.142 18:38:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:59.142 18:38:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.142 18:38:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:59.142 18:38:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.142 18:38:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:59.142 18:38:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:59.399 nvme0n1 00:18:59.399 18:38:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:18:59.399 
18:38:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.399 18:38:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:59.399 18:38:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.399 18:38:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:59.399 18:38:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:59.657 Running I/O for 2 seconds... 00:18:59.657 [2024-07-15 18:38:22.041640] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190ee5c8 00:18:59.657 [2024-07-15 18:38:22.042344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.657 [2024-07-15 18:38:22.042380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:59.657 [2024-07-15 18:38:22.050102] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190fac10 00:18:59.657 [2024-07-15 18:38:22.050801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:10436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.657 [2024-07-15 18:38:22.050835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:59.657 [2024-07-15 18:38:22.059135] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190f35f0 00:18:59.657 [2024-07-15 18:38:22.059822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:21759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.657 [2024-07-15 18:38:22.059852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:59.657 [2024-07-15 18:38:22.067821] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190eff18 00:18:59.657 [2024-07-15 18:38:22.068489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:21992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.657 [2024-07-15 18:38:22.068520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:59.657 [2024-07-15 18:38:22.076831] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190e4de8 00:18:59.657 [2024-07-15 18:38:22.077499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:16135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.657 [2024-07-15 18:38:22.077529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:59.657 [2024-07-15 18:38:22.086121] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190f4298 00:18:59.657 [2024-07-15 18:38:22.086907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:18890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.657 
[2024-07-15 18:38:22.086938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:59.657 [2024-07-15 18:38:22.094751] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190e4de8 00:18:59.657 [2024-07-15 18:38:22.095430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:22068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.657 [2024-07-15 18:38:22.095461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:59.657 [2024-07-15 18:38:22.105274] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190dfdc0 00:18:59.657 [2024-07-15 18:38:22.106553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:17676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.657 [2024-07-15 18:38:22.106589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:59.657 [2024-07-15 18:38:22.114264] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190ec840 00:18:59.657 [2024-07-15 18:38:22.115551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:1647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.657 [2024-07-15 18:38:22.115593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:59.657 [2024-07-15 18:38:22.122707] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190f31b8 00:18:59.657 [2024-07-15 18:38:22.123884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:21132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.657 [2024-07-15 18:38:22.123915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:59.657 [2024-07-15 18:38:22.129185] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190fb8b8 00:18:59.657 [2024-07-15 18:38:22.129737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:13719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.657 [2024-07-15 18:38:22.129767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:59.657 [2024-07-15 18:38:22.139818] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190f7538 00:18:59.657 [2024-07-15 18:38:22.140879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:5785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.657 [2024-07-15 18:38:22.140909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:59.657 [2024-07-15 18:38:22.148799] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190e7c50 00:18:59.657 [2024-07-15 18:38:22.149840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:22581 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:18:59.657 [2024-07-15 18:38:22.149870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:59.657 [2024-07-15 18:38:22.155875] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190fa3a0 00:18:59.657 [2024-07-15 18:38:22.156427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:14621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.657 [2024-07-15 18:38:22.156455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:59.657 [2024-07-15 18:38:22.165314] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190f4298 00:18:59.657 [2024-07-15 18:38:22.165985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:23846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.657 [2024-07-15 18:38:22.166015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:59.657 [2024-07-15 18:38:22.175111] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190f9b30 00:18:59.657 [2024-07-15 18:38:22.176166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:24277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.657 [2024-07-15 18:38:22.176195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:59.657 [2024-07-15 18:38:22.183545] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190fa3a0 00:18:59.657 [2024-07-15 18:38:22.184450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:17487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.657 [2024-07-15 18:38:22.184481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:59.657 [2024-07-15 18:38:22.192175] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190f7100 00:18:59.657 [2024-07-15 18:38:22.192977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:1405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.657 [2024-07-15 18:38:22.193007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:59.657 [2024-07-15 18:38:22.202669] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190fac10 00:18:59.657 [2024-07-15 18:38:22.204074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:10317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.657 [2024-07-15 18:38:22.204107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:59.657 [2024-07-15 18:38:22.209385] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190efae0 00:18:59.657 [2024-07-15 18:38:22.210164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:1788 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.657 [2024-07-15 18:38:22.210194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:59.657 [2024-07-15 18:38:22.220614] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190efae0 00:18:59.657 [2024-07-15 18:38:22.221772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.657 [2024-07-15 18:38:22.221803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:59.657 [2024-07-15 18:38:22.229888] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190f6890 00:18:59.657 [2024-07-15 18:38:22.231166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:12189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.657 [2024-07-15 18:38:22.231198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:59.657 [2024-07-15 18:38:22.236213] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190fa3a0 00:18:59.657 [2024-07-15 18:38:22.236773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:21996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.657 [2024-07-15 18:38:22.236801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:59.657 [2024-07-15 18:38:22.246857] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190f96f8 00:18:59.657 [2024-07-15 18:38:22.247922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:20279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.657 [2024-07-15 18:38:22.247953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:59.657 [2024-07-15 18:38:22.256120] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190f57b0 00:18:59.657 [2024-07-15 18:38:22.257295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.657 [2024-07-15 18:38:22.257324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:59.657 [2024-07-15 18:38:22.265366] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190f2510 00:18:59.658 [2024-07-15 18:38:22.266675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.658 [2024-07-15 18:38:22.266706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:59.916 [2024-07-15 18:38:22.271696] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190ec408 00:18:59.916 [2024-07-15 18:38:22.272275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 
lba:24143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.916 [2024-07-15 18:38:22.272306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:59.916 [2024-07-15 18:38:22.280698] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190f96f8 00:18:59.916 [2024-07-15 18:38:22.281266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:5278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.916 [2024-07-15 18:38:22.281294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:59.916 [2024-07-15 18:38:22.290812] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190fcdd0 00:18:59.916 [2024-07-15 18:38:22.291848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:21697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.916 [2024-07-15 18:38:22.291882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:59.916 [2024-07-15 18:38:22.299516] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190f0ff8 00:18:59.916 [2024-07-15 18:38:22.300339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:18582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.916 [2024-07-15 18:38:22.300369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:59.916 [2024-07-15 18:38:22.307957] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190ed920 00:18:59.916 [2024-07-15 18:38:22.308664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:14645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.916 [2024-07-15 18:38:22.308697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:59.916 [2024-07-15 18:38:22.318562] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190df550 00:18:59.916 [2024-07-15 18:38:22.319898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:7996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.916 [2024-07-15 18:38:22.319927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:59.916 [2024-07-15 18:38:22.324887] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190eff18 00:18:59.916 [2024-07-15 18:38:22.325483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:16517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.916 [2024-07-15 18:38:22.325513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:59.916 [2024-07-15 18:38:22.334137] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190de038 00:18:59.916 [2024-07-15 18:38:22.334851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:28 nsid:1 lba:7426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.916 [2024-07-15 18:38:22.334881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:59.916 [2024-07-15 18:38:22.343129] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190ea248 00:18:59.916 [2024-07-15 18:38:22.343861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:11209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.916 [2024-07-15 18:38:22.343891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:59.916 [2024-07-15 18:38:22.352306] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190ff3c8 00:18:59.916 [2024-07-15 18:38:22.352810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:16395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.916 [2024-07-15 18:38:22.352841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:59.916 [2024-07-15 18:38:22.362443] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190fa3a0 00:18:59.916 [2024-07-15 18:38:22.363547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.916 [2024-07-15 18:38:22.363591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:59.916 [2024-07-15 18:38:22.370257] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190e2c28 00:18:59.916 [2024-07-15 18:38:22.371675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:7765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.916 [2024-07-15 18:38:22.371705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:59.916 [2024-07-15 18:38:22.380100] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190e95a0 00:18:59.916 [2024-07-15 18:38:22.381183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:19113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.916 [2024-07-15 18:38:22.381214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:59.916 [2024-07-15 18:38:22.388243] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190f3a28 00:18:59.916 [2024-07-15 18:38:22.389209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:20211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.916 [2024-07-15 18:38:22.389237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:59.916 [2024-07-15 18:38:22.397352] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190e99d8 00:18:59.916 [2024-07-15 18:38:22.398077] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:10930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.916 [2024-07-15 18:38:22.398108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:59.916 [2024-07-15 18:38:22.405769] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190e4140 00:18:59.916 [2024-07-15 18:38:22.406377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:7728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.916 [2024-07-15 18:38:22.406408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:59.916 [2024-07-15 18:38:22.416269] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190de470 00:18:59.916 [2024-07-15 18:38:22.417724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.916 [2024-07-15 18:38:22.417755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:59.916 [2024-07-15 18:38:22.422608] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190ea248 00:18:59.916 [2024-07-15 18:38:22.423326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:17613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.916 [2024-07-15 18:38:22.423355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:59.916 [2024-07-15 18:38:22.431827] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190e0ea0 00:18:59.916 [2024-07-15 18:38:22.432669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:19126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.916 [2024-07-15 18:38:22.432700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:59.916 [2024-07-15 18:38:22.442431] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190ee5c8 00:18:59.916 [2024-07-15 18:38:22.443787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:9766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.916 [2024-07-15 18:38:22.443816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:59.916 [2024-07-15 18:38:22.451665] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190fe720 00:18:59.916 [2024-07-15 18:38:22.453131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.916 [2024-07-15 18:38:22.453159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:59.916 [2024-07-15 18:38:22.457982] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190f4f40 00:18:59.916 [2024-07-15 
18:38:22.458725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:21861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.916 [2024-07-15 18:38:22.458755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:59.916 [2024-07-15 18:38:22.466977] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190de470 00:18:59.916 [2024-07-15 18:38:22.467722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:18555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.916 [2024-07-15 18:38:22.467754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:59.917 [2024-07-15 18:38:22.475405] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190df118 00:18:59.917 [2024-07-15 18:38:22.476041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.917 [2024-07-15 18:38:22.476070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:59.917 [2024-07-15 18:38:22.484092] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190f92c0 00:18:59.917 [2024-07-15 18:38:22.484721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:14543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.917 [2024-07-15 18:38:22.484749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:59.917 [2024-07-15 18:38:22.494684] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190f0788 00:18:59.917 [2024-07-15 18:38:22.495828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:12659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.917 [2024-07-15 18:38:22.495860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:59.917 [2024-07-15 18:38:22.503943] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190ddc00 00:18:59.917 [2024-07-15 18:38:22.505196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:12262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.917 [2024-07-15 18:38:22.505226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:59.917 [2024-07-15 18:38:22.512213] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190df118 00:18:59.917 [2024-07-15 18:38:22.513244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:8275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.917 [2024-07-15 18:38:22.513277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.917 [2024-07-15 18:38:22.520860] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190f6cc8 
00:18:59.917 [2024-07-15 18:38:22.521767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:5578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.917 [2024-07-15 18:38:22.521797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:00.175 [2024-07-15 18:38:22.529000] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190f7da8 00:19:00.175 [2024-07-15 18:38:22.529776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:8352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.175 [2024-07-15 18:38:22.529806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:00.175 [2024-07-15 18:38:22.537445] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190fd640 00:19:00.175 [2024-07-15 18:38:22.538116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:24395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.175 [2024-07-15 18:38:22.538145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:00.175 [2024-07-15 18:38:22.546092] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190e6300 00:19:00.175 [2024-07-15 18:38:22.546758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:12127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.175 [2024-07-15 18:38:22.546788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:00.175 [2024-07-15 18:38:22.556758] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190f2948 00:19:00.175 [2024-07-15 18:38:22.557930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:16308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.175 [2024-07-15 18:38:22.557962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:00.175 [2024-07-15 18:38:22.565057] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190fe720 00:19:00.176 [2024-07-15 18:38:22.566016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:12166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.176 [2024-07-15 18:38:22.566052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:00.176 [2024-07-15 18:38:22.573723] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190f7100 00:19:00.176 [2024-07-15 18:38:22.574537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:6515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.176 [2024-07-15 18:38:22.574575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:00.176 [2024-07-15 18:38:22.583526] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) 
with pdu=0x2000190f92c0 00:19:00.176 [2024-07-15 18:38:22.584714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:12331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.176 [2024-07-15 18:38:22.584745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:00.176 [2024-07-15 18:38:22.591955] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190ec408 00:19:00.176 [2024-07-15 18:38:22.593026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:11724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.176 [2024-07-15 18:38:22.593057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:00.176 [2024-07-15 18:38:22.600362] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190f6020 00:19:00.176 [2024-07-15 18:38:22.601311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:12720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.176 [2024-07-15 18:38:22.601341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:00.176 [2024-07-15 18:38:22.609810] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190f35f0 00:19:00.176 [2024-07-15 18:38:22.610856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:25488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.176 [2024-07-15 18:38:22.610888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:00.176 [2024-07-15 18:38:22.618234] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190ddc00 00:19:00.176 [2024-07-15 18:38:22.619169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:20984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.176 [2024-07-15 18:38:22.619201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:00.176 [2024-07-15 18:38:22.626373] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190dfdc0 00:19:00.176 [2024-07-15 18:38:22.627192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.176 [2024-07-15 18:38:22.627229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:00.176 [2024-07-15 18:38:22.635699] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190df118 00:19:00.176 [2024-07-15 18:38:22.636623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:23445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.176 [2024-07-15 18:38:22.636653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:00.176 [2024-07-15 18:38:22.644323] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x112c880) with pdu=0x2000190e5220 00:19:00.176 [2024-07-15 18:38:22.645272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:12106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.176 [2024-07-15 18:38:22.645301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:00.176 [2024-07-15 18:38:22.653578] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190df550 00:19:00.176 [2024-07-15 18:38:22.654648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:3064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.176 [2024-07-15 18:38:22.654678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:00.176 [2024-07-15 18:38:22.662807] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190f4b08 00:19:00.176 [2024-07-15 18:38:22.664001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:12461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.176 [2024-07-15 18:38:22.664031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:00.176 [2024-07-15 18:38:22.670662] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190e3498 00:19:00.176 [2024-07-15 18:38:22.672043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:12448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.176 [2024-07-15 18:38:22.672076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:00.176 [2024-07-15 18:38:22.678337] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190e5220 00:19:00.176 [2024-07-15 18:38:22.678930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:14517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.176 [2024-07-15 18:38:22.678960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:00.176 [2024-07-15 18:38:22.687320] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190f0bc0 00:19:00.176 [2024-07-15 18:38:22.687910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.176 [2024-07-15 18:38:22.687938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:00.176 [2024-07-15 18:38:22.697771] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190eee38 00:19:00.176 [2024-07-15 18:38:22.698480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:2848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.176 [2024-07-15 18:38:22.698512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:00.176 [2024-07-15 18:38:22.706195] tcp.c:2081:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190fbcf0 00:19:00.176 [2024-07-15 18:38:22.706830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:13609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.176 [2024-07-15 18:38:22.706860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:00.176 [2024-07-15 18:38:22.716629] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190df550 00:19:00.176 [2024-07-15 18:38:22.718063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:6230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.176 [2024-07-15 18:38:22.718091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:00.176 [2024-07-15 18:38:22.722948] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190e01f8 00:19:00.176 [2024-07-15 18:38:22.723675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:1285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.176 [2024-07-15 18:38:22.723706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:00.176 [2024-07-15 18:38:22.733514] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190f8618 00:19:00.176 [2024-07-15 18:38:22.734632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:8471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.176 [2024-07-15 18:38:22.734663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:00.176 [2024-07-15 18:38:22.741930] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190f3e60 00:19:00.176 [2024-07-15 18:38:22.742899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.176 [2024-07-15 18:38:22.742929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:00.176 [2024-07-15 18:38:22.750774] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190f96f8 00:19:00.176 [2024-07-15 18:38:22.751505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:7461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.176 [2024-07-15 18:38:22.751537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:00.176 [2024-07-15 18:38:22.759248] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190f92c0 00:19:00.176 [2024-07-15 18:38:22.759896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:5159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.176 [2024-07-15 18:38:22.759927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:00.176 [2024-07-15 18:38:22.767715] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190f2510 00:19:00.176 [2024-07-15 18:38:22.768202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:23807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.176 [2024-07-15 18:38:22.768233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:00.176 [2024-07-15 18:38:22.776924] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190fb480 00:19:00.176 [2024-07-15 18:38:22.777534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:23248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.176 [2024-07-15 18:38:22.777577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:00.176 [2024-07-15 18:38:22.785742] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190fda78 00:19:00.176 [2024-07-15 18:38:22.786603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:19078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.176 [2024-07-15 18:38:22.786630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:00.435 [2024-07-15 18:38:22.795862] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190e4de8 00:19:00.435 [2024-07-15 18:38:22.797186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:9121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.435 [2024-07-15 18:38:22.797217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:00.435 [2024-07-15 18:38:22.804623] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190e0a68 00:19:00.435 [2024-07-15 18:38:22.805724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.435 [2024-07-15 18:38:22.805753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:00.435 [2024-07-15 18:38:22.813300] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190e23b8 00:19:00.435 [2024-07-15 18:38:22.814396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:3920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.435 [2024-07-15 18:38:22.814426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:00.435 [2024-07-15 18:38:22.821670] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190e5a90 00:19:00.435 [2024-07-15 18:38:22.822554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:23118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.435 [2024-07-15 18:38:22.822595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:00.435 
[2024-07-15 18:38:22.830509] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190df988 00:19:00.435 [2024-07-15 18:38:22.831444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:1109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.435 [2024-07-15 18:38:22.831477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:00.435 [2024-07-15 18:38:22.841496] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190e9168 00:19:00.435 [2024-07-15 18:38:22.842883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:23432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.435 [2024-07-15 18:38:22.842917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:00.435 [2024-07-15 18:38:22.847893] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190e9168 00:19:00.435 [2024-07-15 18:38:22.848582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:5654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.435 [2024-07-15 18:38:22.848620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:00.435 [2024-07-15 18:38:22.856918] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190f46d0 00:19:00.435 [2024-07-15 18:38:22.857579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:22211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.435 [2024-07-15 18:38:22.857609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:00.435 [2024-07-15 18:38:22.865365] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190f9b30 00:19:00.435 [2024-07-15 18:38:22.865921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:18198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.435 [2024-07-15 18:38:22.865950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:00.435 [2024-07-15 18:38:22.874046] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190f4f40 00:19:00.435 [2024-07-15 18:38:22.874593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:12341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.435 [2024-07-15 18:38:22.874624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:00.435 [2024-07-15 18:38:22.884724] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190ea248 00:19:00.435 [2024-07-15 18:38:22.885776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.435 [2024-07-15 18:38:22.885807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0045 p:0 
m:0 dnr:0 00:19:00.435 [2024-07-15 18:38:22.893018] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190e5220 00:19:00.435 [2024-07-15 18:38:22.893836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:19610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.435 [2024-07-15 18:38:22.893866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:00.435 [2024-07-15 18:38:22.901682] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190f1868 00:19:00.435 [2024-07-15 18:38:22.902503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:2724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.435 [2024-07-15 18:38:22.902532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:00.435 [2024-07-15 18:38:22.910948] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190f4b08 00:19:00.435 [2024-07-15 18:38:22.911906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:3816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.435 [2024-07-15 18:38:22.911936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:00.435 [2024-07-15 18:38:22.921594] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190fc128 00:19:00.435 [2024-07-15 18:38:22.923057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:3974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.435 [2024-07-15 18:38:22.923089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:00.435 [2024-07-15 18:38:22.927923] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190e7c50 00:19:00.435 [2024-07-15 18:38:22.928523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:1273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.435 [2024-07-15 18:38:22.928553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:00.435 [2024-07-15 18:38:22.936350] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190f8a50 00:19:00.435 [2024-07-15 18:38:22.936949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:8625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.435 [2024-07-15 18:38:22.936978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:00.435 [2024-07-15 18:38:22.947008] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190e8d30 00:19:00.435 [2024-07-15 18:38:22.948003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:15169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.435 [2024-07-15 18:38:22.948028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 
cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:00.435 [2024-07-15 18:38:22.955419] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190f7970 00:19:00.435 [2024-07-15 18:38:22.956386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:10516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.435 [2024-07-15 18:38:22.956414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:00.435 [2024-07-15 18:38:22.966020] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190e2c28 00:19:00.435 [2024-07-15 18:38:22.967494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:22391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.435 [2024-07-15 18:38:22.967524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:00.435 [2024-07-15 18:38:22.972329] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190e73e0 00:19:00.435 [2024-07-15 18:38:22.973085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.436 [2024-07-15 18:38:22.973115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:00.436 [2024-07-15 18:38:22.981339] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190e95a0 00:19:00.436 [2024-07-15 18:38:22.982092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.436 [2024-07-15 18:38:22.982121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:00.436 [2024-07-15 18:38:22.990546] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190fc998 00:19:00.436 [2024-07-15 18:38:22.991074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:1708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.436 [2024-07-15 18:38:22.991103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:00.436 [2024-07-15 18:38:22.999760] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190fc560 00:19:00.436 [2024-07-15 18:38:23.000368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:4118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.436 [2024-07-15 18:38:23.000399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:00.436 [2024-07-15 18:38:23.008591] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190df118 00:19:00.436 [2024-07-15 18:38:23.009443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.436 [2024-07-15 18:38:23.009472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:94 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:00.436 [2024-07-15 18:38:23.017042] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190ee190 00:19:00.436 [2024-07-15 18:38:23.017782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:2182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.436 [2024-07-15 18:38:23.017813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:00.436 [2024-07-15 18:38:23.025480] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190e1b48 00:19:00.436 [2024-07-15 18:38:23.026113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:19226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.436 [2024-07-15 18:38:23.026143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:00.436 [2024-07-15 18:38:23.036085] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190f7970 00:19:00.436 [2024-07-15 18:38:23.037323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.436 [2024-07-15 18:38:23.037354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:00.436 [2024-07-15 18:38:23.043969] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190e7818 00:19:00.436 [2024-07-15 18:38:23.045393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.436 [2024-07-15 18:38:23.045426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:00.695 [2024-07-15 18:38:23.053470] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190e0ea0 00:19:00.695 [2024-07-15 18:38:23.054269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.695 [2024-07-15 18:38:23.054302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:00.695 [2024-07-15 18:38:23.061936] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190e8d30 00:19:00.695 [2024-07-15 18:38:23.062579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:11797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.695 [2024-07-15 18:38:23.062610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:00.695 [2024-07-15 18:38:23.070751] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190e7c50 00:19:00.695 [2024-07-15 18:38:23.071647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:13829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.695 [2024-07-15 18:38:23.071682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:00.695 [2024-07-15 18:38:23.079233] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190fa7d8 00:19:00.695 [2024-07-15 18:38:23.079997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:3206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.695 [2024-07-15 18:38:23.080027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:00.695 [2024-07-15 18:38:23.088132] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190e84c0 00:19:00.695 [2024-07-15 18:38:23.088654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:18864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.695 [2024-07-15 18:38:23.088685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:00.695 [2024-07-15 18:38:23.098307] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190fef90 00:19:00.695 [2024-07-15 18:38:23.099450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:2633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.695 [2024-07-15 18:38:23.099483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:00.695 [2024-07-15 18:38:23.106772] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190e73e0 00:19:00.695 [2024-07-15 18:38:23.107800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:10127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.695 [2024-07-15 18:38:23.107833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:00.695 [2024-07-15 18:38:23.116836] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190fbcf0 00:19:00.695 [2024-07-15 18:38:23.118325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:4932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.695 [2024-07-15 18:38:23.118356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.695 [2024-07-15 18:38:23.123145] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190ec408 00:19:00.695 [2024-07-15 18:38:23.123923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.695 [2024-07-15 18:38:23.123953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:00.695 [2024-07-15 18:38:23.133773] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190f7970 00:19:00.695 [2024-07-15 18:38:23.135047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:2892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.695 [2024-07-15 18:38:23.135077] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:00.695 [2024-07-15 18:38:23.140094] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190fac10 00:19:00.695 [2024-07-15 18:38:23.140645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:2994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.695 [2024-07-15 18:38:23.140673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:00.695 [2024-07-15 18:38:23.150725] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190eaab8 00:19:00.695 [2024-07-15 18:38:23.151662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:23640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.695 [2024-07-15 18:38:23.151694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:00.695 [2024-07-15 18:38:23.159136] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190fd640 00:19:00.695 [2024-07-15 18:38:23.159941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:25451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.695 [2024-07-15 18:38:23.159970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:00.695 [2024-07-15 18:38:23.169734] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190e1710 00:19:00.695 [2024-07-15 18:38:23.171163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:10156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.695 [2024-07-15 18:38:23.171194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:00.695 [2024-07-15 18:38:23.176048] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190e6738 00:19:00.695 [2024-07-15 18:38:23.176633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:15011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.695 [2024-07-15 18:38:23.176663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:00.695 [2024-07-15 18:38:23.187227] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190e1b48 00:19:00.695 [2024-07-15 18:38:23.188542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:11783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.695 [2024-07-15 18:38:23.188587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:00.696 [2024-07-15 18:38:23.193277] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190e49b0 00:19:00.696 [2024-07-15 18:38:23.193852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.696 [2024-07-15 18:38:23.193883] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:00.696 [2024-07-15 18:38:23.202281] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190ec408 00:19:00.696 [2024-07-15 18:38:23.202849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:14046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.696 [2024-07-15 18:38:23.202879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:00.696 [2024-07-15 18:38:23.210969] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190efae0 00:19:00.696 [2024-07-15 18:38:23.211527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:8457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.696 [2024-07-15 18:38:23.211557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:00.696 [2024-07-15 18:38:23.221774] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190ec840 00:19:00.696 [2024-07-15 18:38:23.222711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:3713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.696 [2024-07-15 18:38:23.222742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:00.696 [2024-07-15 18:38:23.229931] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190ed920 00:19:00.696 [2024-07-15 18:38:23.230737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:5475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.696 [2024-07-15 18:38:23.230767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:00.696 [2024-07-15 18:38:23.239065] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190dece0 00:19:00.696 [2024-07-15 18:38:23.239662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:25423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.696 [2024-07-15 18:38:23.239692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:00.696 [2024-07-15 18:38:23.247480] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190ed0b0 00:19:00.696 [2024-07-15 18:38:23.247942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.696 [2024-07-15 18:38:23.247972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:00.696 [2024-07-15 18:38:23.257653] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190fc560 00:19:00.696 [2024-07-15 18:38:23.258711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:9703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.696 [2024-07-15 
18:38:23.258741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:00.696 [2024-07-15 18:38:23.265486] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190eb760 00:19:00.696 [2024-07-15 18:38:23.266858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:3174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.696 [2024-07-15 18:38:23.266887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:00.696 [2024-07-15 18:38:23.275014] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190fd208 00:19:00.696 [2024-07-15 18:38:23.275750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:6231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.696 [2024-07-15 18:38:23.275780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:00.696 [2024-07-15 18:38:23.283827] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190eee38 00:19:00.696 [2024-07-15 18:38:23.284760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:10485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.696 [2024-07-15 18:38:23.284790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:00.696 [2024-07-15 18:38:23.292763] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190f1ca0 00:19:00.696 [2024-07-15 18:38:23.293456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:9291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.696 [2024-07-15 18:38:23.293491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:00.696 [2024-07-15 18:38:23.301436] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190e6738 00:19:00.696 [2024-07-15 18:38:23.302395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:25156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.696 [2024-07-15 18:38:23.302428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:00.955 [2024-07-15 18:38:23.310099] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190f31b8 00:19:00.955 [2024-07-15 18:38:23.310933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:19518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.955 [2024-07-15 18:38:23.310963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:00.955 [2024-07-15 18:38:23.318923] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190f3a28 00:19:00.955 [2024-07-15 18:38:23.319499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:15352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:00.955 [2024-07-15 18:38:23.319531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:00.955 [2024-07-15 18:38:23.327413] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190fb8b8 00:19:00.955 [2024-07-15 18:38:23.327907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:17325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.955 [2024-07-15 18:38:23.327938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:00.955 [2024-07-15 18:38:23.335378] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190f31b8 00:19:00.955 [2024-07-15 18:38:23.335965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:3180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.955 [2024-07-15 18:38:23.335994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:00.955 [2024-07-15 18:38:23.344457] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190eaab8 00:19:00.955 [2024-07-15 18:38:23.345050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:9793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.955 [2024-07-15 18:38:23.345079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:00.955 [2024-07-15 18:38:23.354935] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190f8618 00:19:00.955 [2024-07-15 18:38:23.355659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.955 [2024-07-15 18:38:23.355689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:00.955 [2024-07-15 18:38:23.363722] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190f6890 00:19:00.955 [2024-07-15 18:38:23.364681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:13826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.955 [2024-07-15 18:38:23.364711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:00.955 [2024-07-15 18:38:23.372144] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190ddc00 00:19:00.955 [2024-07-15 18:38:23.372987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.955 [2024-07-15 18:38:23.373017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:00.955 [2024-07-15 18:38:23.380970] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190f4b08 00:19:00.955 [2024-07-15 18:38:23.381931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:4231 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:19:00.955 [2024-07-15 18:38:23.381961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:00.955 [2024-07-15 18:38:23.389930] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190e4de8 00:19:00.955 [2024-07-15 18:38:23.390516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:17935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.955 [2024-07-15 18:38:23.390547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:00.955 [2024-07-15 18:38:23.399261] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190e5ec8 00:19:00.955 [2024-07-15 18:38:23.399980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.955 [2024-07-15 18:38:23.400010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:00.955 [2024-07-15 18:38:23.407908] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190ddc00 00:19:00.955 [2024-07-15 18:38:23.408870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:23165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.955 [2024-07-15 18:38:23.408901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:00.955 [2024-07-15 18:38:23.416575] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190fac10 00:19:00.955 [2024-07-15 18:38:23.417419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:12842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.955 [2024-07-15 18:38:23.417449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:00.955 [2024-07-15 18:38:23.424981] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190e01f8 00:19:00.955 [2024-07-15 18:38:23.425705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:2880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.955 [2024-07-15 18:38:23.425735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:00.955 [2024-07-15 18:38:23.433414] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190f3a28 00:19:00.955 [2024-07-15 18:38:23.434030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:17138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.955 [2024-07-15 18:38:23.434060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:00.955 [2024-07-15 18:38:23.443964] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190fd208 00:19:00.955 [2024-07-15 18:38:23.445179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:20081 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.955 [2024-07-15 18:38:23.445209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:00.955 [2024-07-15 18:38:23.452207] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190de038 00:19:00.955 [2024-07-15 18:38:23.453214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:21812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.955 [2024-07-15 18:38:23.453245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:00.955 [2024-07-15 18:38:23.460910] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190e8d30 00:19:00.955 [2024-07-15 18:38:23.461978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.955 [2024-07-15 18:38:23.462006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:00.955 [2024-07-15 18:38:23.470089] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190fe2e8 00:19:00.955 [2024-07-15 18:38:23.471092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:23330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.955 [2024-07-15 18:38:23.471122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:00.955 [2024-07-15 18:38:23.479561] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190f8e88 00:19:00.955 [2024-07-15 18:38:23.480682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:22212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.955 [2024-07-15 18:38:23.480712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:00.955 [2024-07-15 18:38:23.487379] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190e3d08 00:19:00.955 [2024-07-15 18:38:23.488806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:7326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.955 [2024-07-15 18:38:23.488835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:00.955 [2024-07-15 18:38:23.496853] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190eea00 00:19:00.955 [2024-07-15 18:38:23.497637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:13943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.955 [2024-07-15 18:38:23.497668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:00.955 [2024-07-15 18:38:23.504977] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190f3e60 00:19:00.955 [2024-07-15 18:38:23.506384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:83 nsid:1 lba:13672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.955 [2024-07-15 18:38:23.506416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:00.955 [2024-07-15 18:38:23.512656] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190fb048 00:19:00.955 [2024-07-15 18:38:23.513271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:23061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.955 [2024-07-15 18:38:23.513300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:00.955 [2024-07-15 18:38:23.521621] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190e2c28 00:19:00.955 [2024-07-15 18:38:23.522229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:10833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.955 [2024-07-15 18:38:23.522259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:00.955 [2024-07-15 18:38:23.530232] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190e8088 00:19:00.955 [2024-07-15 18:38:23.530843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:4919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.955 [2024-07-15 18:38:23.530873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:00.955 [2024-07-15 18:38:23.540836] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190ff3c8 00:19:00.955 [2024-07-15 18:38:23.541819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:20839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.955 [2024-07-15 18:38:23.541849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:00.955 [2024-07-15 18:38:23.550609] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190efae0 00:19:00.955 [2024-07-15 18:38:23.551957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:8303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.956 [2024-07-15 18:38:23.551988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:00.956 [2024-07-15 18:38:23.559014] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190fc128 00:19:00.956 [2024-07-15 18:38:23.560259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:16679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.956 [2024-07-15 18:38:23.560291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:00.956 [2024-07-15 18:38:23.567069] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190ed0b0 00:19:01.214 [2024-07-15 18:38:23.568054] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:9715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.214 [2024-07-15 18:38:23.568088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:01.214 [2024-07-15 18:38:23.575730] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190fa7d8 00:19:01.214 [2024-07-15 18:38:23.576594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:25263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.214 [2024-07-15 18:38:23.576624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:01.214 [2024-07-15 18:38:23.585775] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190f7da8 00:19:01.214 [2024-07-15 18:38:23.587121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.214 [2024-07-15 18:38:23.587150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:01.214 [2024-07-15 18:38:23.595048] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190ecc78 00:19:01.214 [2024-07-15 18:38:23.596518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:6890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.214 [2024-07-15 18:38:23.596552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:01.214 [2024-07-15 18:38:23.601343] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190eea00 00:19:01.214 [2024-07-15 18:38:23.601959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:24606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.214 [2024-07-15 18:38:23.601990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:01.214 [2024-07-15 18:38:23.611545] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190ed0b0 00:19:01.214 [2024-07-15 18:38:23.612315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.214 [2024-07-15 18:38:23.612349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:01.214 [2024-07-15 18:38:23.619959] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190fdeb0 00:19:01.214 [2024-07-15 18:38:23.620578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:17832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.214 [2024-07-15 18:38:23.620607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:01.214 [2024-07-15 18:38:23.629143] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190fb048 00:19:01.214 [2024-07-15 
18:38:23.629887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:6740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.214 [2024-07-15 18:38:23.629917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:01.214 [2024-07-15 18:38:23.637610] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190f0788 00:19:01.214 [2024-07-15 18:38:23.638255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:19361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.214 [2024-07-15 18:38:23.638286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:01.214 [2024-07-15 18:38:23.648055] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190f3e60 00:19:01.214 [2024-07-15 18:38:23.649512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.214 [2024-07-15 18:38:23.649538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:01.214 [2024-07-15 18:38:23.654360] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190f46d0 00:19:01.214 [2024-07-15 18:38:23.655102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:21971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.214 [2024-07-15 18:38:23.655132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:01.214 [2024-07-15 18:38:23.663353] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190f7100 00:19:01.214 [2024-07-15 18:38:23.664091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:13581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.214 [2024-07-15 18:38:23.664120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:01.214 [2024-07-15 18:38:23.671795] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190e5a90 00:19:01.214 [2024-07-15 18:38:23.672420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.214 [2024-07-15 18:38:23.672449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:01.214 [2024-07-15 18:38:23.680472] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190e27f0 00:19:01.215 [2024-07-15 18:38:23.681106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:4867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.215 [2024-07-15 18:38:23.681135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:01.215 [2024-07-15 18:38:23.689739] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190f6458 
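The entries above all follow one pattern: tcp.c reports a data digest error on the qpair, and the affected WRITE is then completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22) so it can be retried. NVMe/TCP protects PDU payloads with a CRC-32C data digest (DDGST). The sketch below is a minimal, illustrative check in Python, not SPDK's implementation (the helper names are made up), showing the comparison that fails when the digest carried with a data PDU does not match the payload it covers.

# Minimal illustrative sketch (assumption: not SPDK code) of an NVMe/TCP data digest check.
# NVMe/TCP digests use CRC-32C: reflected polynomial 0x82F63B78,
# initial value 0xFFFFFFFF, final XOR 0xFFFFFFFF.

def crc32c(data: bytes) -> int:
    # Bitwise CRC-32C; slow but dependency-free and easy to audit.
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

def data_digest_ok(pdu_payload: bytes, received_ddgst: int) -> bool:
    # The receiver recomputes the digest over the payload and compares it with the
    # DDGST value carried in the PDU; a mismatch is what the log reports above as
    # "Data digest error".
    return crc32c(pdu_payload) == received_ddgst

if __name__ == "__main__":
    payload = b"\x00" * 0x1000              # one 4 KiB data block, matching len:0x1000 in the log
    good = crc32c(payload)
    assert data_digest_ok(payload, good)
    assert not data_digest_ok(payload, good ^ 1)   # a corrupted digest or payload fails the check

As the pairing of each *ERROR* line with a *NOTICE* completion shows, a digest mismatch is surfaced as a retryable transient transport error (00/22) for the individual command rather than as a fatal connection failure, which is why the test stream continues below.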
00:19:01.215 [2024-07-15 18:38:23.690476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:18332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.215 [2024-07-15 18:38:23.690504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:01.215 [2024-07-15 18:38:23.700382] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190e5a90 00:19:01.215 [2024-07-15 18:38:23.701638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:12719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.215 [2024-07-15 18:38:23.701667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:01.215 [2024-07-15 18:38:23.708650] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190f3a28 00:19:01.215 [2024-07-15 18:38:23.709659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:6253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.215 [2024-07-15 18:38:23.709690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.215 [2024-07-15 18:38:23.717279] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190e1b48 00:19:01.215 [2024-07-15 18:38:23.718180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:19939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.215 [2024-07-15 18:38:23.718210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:01.215 [2024-07-15 18:38:23.725666] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190e4578 00:19:01.215 [2024-07-15 18:38:23.726552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:15428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.215 [2024-07-15 18:38:23.726587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:01.215 [2024-07-15 18:38:23.734664] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190df550 00:19:01.215 [2024-07-15 18:38:23.735556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:15812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.215 [2024-07-15 18:38:23.735589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:01.215 [2024-07-15 18:38:23.743080] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190f31b8 00:19:01.215 [2024-07-15 18:38:23.743877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:23986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.215 [2024-07-15 18:38:23.743906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:01.215 [2024-07-15 18:38:23.751720] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x112c880) with pdu=0x2000190eb760 00:19:01.215 [2024-07-15 18:38:23.752502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.215 [2024-07-15 18:38:23.752531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:01.215 [2024-07-15 18:38:23.760705] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190f6458 00:19:01.215 [2024-07-15 18:38:23.761479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:20088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.215 [2024-07-15 18:38:23.761507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:01.215 [2024-07-15 18:38:23.769363] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190f5be8 00:19:01.215 [2024-07-15 18:38:23.770140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:10180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.215 [2024-07-15 18:38:23.770169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:01.215 [2024-07-15 18:38:23.778350] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190ea680 00:19:01.215 [2024-07-15 18:38:23.779122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:1091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.215 [2024-07-15 18:38:23.779151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:01.215 [2024-07-15 18:38:23.788264] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190fda78 00:19:01.215 [2024-07-15 18:38:23.789039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.215 [2024-07-15 18:38:23.789069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:01.215 [2024-07-15 18:38:23.797079] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190ebfd0 00:19:01.215 [2024-07-15 18:38:23.798088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:2133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.215 [2024-07-15 18:38:23.798118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:01.215 [2024-07-15 18:38:23.805511] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190f7da8 00:19:01.215 [2024-07-15 18:38:23.806408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:2330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.215 [2024-07-15 18:38:23.806438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:01.215 [2024-07-15 18:38:23.814321] tcp.c:2081:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x112c880) with pdu=0x2000190de470 00:19:01.215 [2024-07-15 18:38:23.814967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:22070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.215 [2024-07-15 18:38:23.814997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:01.215 [2024-07-15 18:38:23.822782] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190f96f8 00:19:01.215 [2024-07-15 18:38:23.823344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:4702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.215 [2024-07-15 18:38:23.823374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:01.474 [2024-07-15 18:38:23.833676] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190eff18 00:19:01.474 [2024-07-15 18:38:23.835167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.474 [2024-07-15 18:38:23.835196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:01.474 [2024-07-15 18:38:23.839978] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190f7da8 00:19:01.474 [2024-07-15 18:38:23.840753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:13344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.474 [2024-07-15 18:38:23.840781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:01.474 [2024-07-15 18:38:23.848858] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190e5a90 00:19:01.474 [2024-07-15 18:38:23.849268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:1551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.474 [2024-07-15 18:38:23.849299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:01.474 [2024-07-15 18:38:23.858976] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190f2d80 00:19:01.474 [2024-07-15 18:38:23.860006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:17131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.474 [2024-07-15 18:38:23.860036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:01.474 [2024-07-15 18:38:23.867393] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190e4140 00:19:01.474 [2024-07-15 18:38:23.868309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:20652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.474 [2024-07-15 18:38:23.868341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:01.474 [2024-07-15 18:38:23.875790] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190ea248 00:19:01.474 [2024-07-15 18:38:23.876585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:3614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.474 [2024-07-15 18:38:23.876616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:01.474 [2024-07-15 18:38:23.884636] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190e2c28 00:19:01.474 [2024-07-15 18:38:23.885184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:8389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.474 [2024-07-15 18:38:23.885217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:01.474 [2024-07-15 18:38:23.893078] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190e95a0 00:19:01.474 [2024-07-15 18:38:23.893542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:2064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.474 [2024-07-15 18:38:23.893586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:01.474 [2024-07-15 18:38:23.903520] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190fcdd0 00:19:01.474 [2024-07-15 18:38:23.904814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:25236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.474 [2024-07-15 18:38:23.904845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:01.474 [2024-07-15 18:38:23.909876] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190eb760 00:19:01.474 [2024-07-15 18:38:23.910450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:17741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.474 [2024-07-15 18:38:23.910479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:01.474 [2024-07-15 18:38:23.918920] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190ed920 00:19:01.474 [2024-07-15 18:38:23.919481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:19389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.474 [2024-07-15 18:38:23.919511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:01.474 [2024-07-15 18:38:23.929338] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190eee38 00:19:01.474 [2024-07-15 18:38:23.930383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:23230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.474 [2024-07-15 18:38:23.930414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:01.474 
[2024-07-15 18:38:23.938603] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190ea248 00:19:01.474 [2024-07-15 18:38:23.939781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:2810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.474 [2024-07-15 18:38:23.939813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:01.474 [2024-07-15 18:38:23.947603] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190eff18 00:19:01.474 [2024-07-15 18:38:23.948777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:7897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.474 [2024-07-15 18:38:23.948807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:01.474 [2024-07-15 18:38:23.954946] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190f6890 00:19:01.474 [2024-07-15 18:38:23.955752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:8908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.474 [2024-07-15 18:38:23.955782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:01.474 [2024-07-15 18:38:23.965554] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190fe720 00:19:01.474 [2024-07-15 18:38:23.966868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:7638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.474 [2024-07-15 18:38:23.966898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:01.474 [2024-07-15 18:38:23.974598] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190fd208 00:19:01.474 [2024-07-15 18:38:23.975911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:4908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.474 [2024-07-15 18:38:23.975943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:01.474 [2024-07-15 18:38:23.980719] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190e9e10 00:19:01.474 [2024-07-15 18:38:23.981289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:21722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.474 [2024-07-15 18:38:23.981318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:01.474 [2024-07-15 18:38:23.989716] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190f6cc8 00:19:01.474 [2024-07-15 18:38:23.990279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:11833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.474 [2024-07-15 18:38:23.990307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0017 p:0 
m:0 dnr:0 00:19:01.474 [2024-07-15 18:38:24.000174] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190ed0b0 00:19:01.474 [2024-07-15 18:38:24.000877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:4138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.474 [2024-07-15 18:38:24.000908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:01.474 [2024-07-15 18:38:24.008936] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190f46d0 00:19:01.474 [2024-07-15 18:38:24.009894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:12265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.474 [2024-07-15 18:38:24.009924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:01.474 [2024-07-15 18:38:24.018988] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190fc560 00:19:01.474 [2024-07-15 18:38:24.020417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:8281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.474 [2024-07-15 18:38:24.020447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:01.474 [2024-07-15 18:38:24.027709] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112c880) with pdu=0x2000190e95a0 00:19:01.474 [2024-07-15 18:38:24.029026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.474 [2024-07-15 18:38:24.029054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:01.474 00:19:01.474 Latency(us) 00:19:01.474 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:01.474 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:01.475 nvme0n1 : 2.00 28719.67 112.19 0.00 0.00 4452.29 1829.22 12054.41 00:19:01.475 =================================================================================================================== 00:19:01.475 Total : 28719.67 112.19 0.00 0.00 4452.29 1829.22 12054.41 00:19:01.475 0 00:19:01.475 18:38:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:19:01.475 18:38:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:19:01.475 18:38:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:19:01.475 | .driver_specific 00:19:01.475 | .nvme_error 00:19:01.475 | .status_code 00:19:01.475 | .command_transient_transport_error' 00:19:01.475 18:38:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:19:01.732 18:38:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 225 > 0 )) 00:19:01.733 18:38:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 93231 00:19:01.733 18:38:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 93231 ']' 
00:19:01.733 18:38:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 93231 00:19:01.733 18:38:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:19:01.733 18:38:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:01.733 18:38:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93231 00:19:01.733 18:38:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:01.733 18:38:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:01.733 killing process with pid 93231 00:19:01.733 Received shutdown signal, test time was about 2.000000 seconds 00:19:01.733 00:19:01.733 Latency(us) 00:19:01.733 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:01.733 =================================================================================================================== 00:19:01.733 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:01.733 18:38:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93231' 00:19:01.733 18:38:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 93231 00:19:01.733 18:38:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 93231 00:19:01.991 18:38:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:19:01.991 18:38:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:19:01.991 18:38:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:19:01.991 18:38:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:19:01.991 18:38:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:19:01.991 18:38:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=93321 00:19:01.991 18:38:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 93321 /var/tmp/bperf.sock 00:19:01.991 18:38:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:19:01.991 18:38:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 93321 ']' 00:19:01.991 18:38:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:01.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:01.991 18:38:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:01.991 18:38:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:01.991 18:38:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:01.991 18:38:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:01.991 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:01.991 Zero copy mechanism will not be used. 
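For reference, the get_transient_errcount check traced above reduces to a single bperf RPC piped through jq; the sketch below is a standalone rendering of that pipeline, with the socket path, bdev name and jq filter copied from the trace and the errcount variable added only for illustration. In the run above the check expanded to (( 225 > 0 )), i.e. 225 write completions carried the transient transport error status.

    # Standalone sketch of get_transient_errcount: query bdevperf's per-bdev I/O
    # statistics and extract the NVMe "command transient transport error" counter
    # (populated because bdev_nvme_set_options was called with --nvme-error-stat).
    errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
                   bdev_get_iostat -b nvme0n1 |
               jq -r '.bdevs[0]
                      | .driver_specific
                      | .nvme_error
                      | .status_code
                      | .command_transient_transport_error')
    # The digest-error test passes only if at least one such error was counted.
    (( errcount > 0 ))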
00:19:01.991 [2024-07-15 18:38:24.522535] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:19:01.991 [2024-07-15 18:38:24.522625] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93321 ] 00:19:02.249 [2024-07-15 18:38:24.662325] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:02.249 [2024-07-15 18:38:24.748256] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:02.814 18:38:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:02.814 18:38:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:19:02.814 18:38:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:02.814 18:38:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:03.071 18:38:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:19:03.071 18:38:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.071 18:38:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:03.071 18:38:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.071 18:38:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:03.071 18:38:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:03.328 nvme0n1 00:19:03.328 18:38:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:19:03.328 18:38:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.328 18:38:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:03.328 18:38:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.328 18:38:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:19:03.328 18:38:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:03.586 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:03.586 Zero copy mechanism will not be used. 00:19:03.586 Running I/O for 2 seconds... 
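The xtrace above spells out the recipe for this second run (131072-byte random writes, queue depth 16); a condensed sketch of the same sequence follows. Binary paths, socket paths, addresses and flags are copied from the trace; the SPDK and BPERF_SOCK variables are illustrative, and the target-side accel_error_inject_error calls are shown against rpc.py's default socket, whereas the harness issues them through its own rpc_cmd helper.

    SPDK=/home/vagrant/spdk_repo/spdk
    BPERF_SOCK=/var/tmp/bperf.sock

    # Start bdevperf in wait-for-tests mode (-z): 131072-byte random writes,
    # queue depth 16, 2 seconds, on its own RPC socket (the harness then waits
    # for the socket with waitforlisten).
    $SPDK/build/examples/bdevperf -m 2 -r $BPERF_SOCK -w randwrite -o 131072 -t 2 -q 16 -z &

    # Record per-status-code NVMe error counts (--nvme-error-stat) and set the
    # bdev retry count to -1, as in the trace.
    $SPDK/scripts/rpc.py -s $BPERF_SOCK bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Target side: stop corrupting crc32c so the controller attach below succeeds.
    $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t disable

    # Attach the subsystem with TCP data digest enabled (--ddgst).
    $SPDK/scripts/rpc.py -s $BPERF_SOCK bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Re-enable crc32c corruption (-t corrupt -i 32, as in the trace) and run the
    # workload; each corrupted digest surfaces as a "Data digest error" line below.
    $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s $BPERF_SOCK perform_tests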
00:19:03.586 [2024-07-15 18:38:25.995122] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.586 [2024-07-15 18:38:25.995534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.586 [2024-07-15 18:38:25.995562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:03.586 [2024-07-15 18:38:25.999189] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.586 [2024-07-15 18:38:25.999588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.586 [2024-07-15 18:38:25.999618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:03.586 [2024-07-15 18:38:26.003091] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.586 [2024-07-15 18:38:26.003475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.586 [2024-07-15 18:38:26.003503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:03.586 [2024-07-15 18:38:26.007119] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.586 [2024-07-15 18:38:26.007497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.586 [2024-07-15 18:38:26.007526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.586 [2024-07-15 18:38:26.011114] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.586 [2024-07-15 18:38:26.011497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.586 [2024-07-15 18:38:26.011529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:03.586 [2024-07-15 18:38:26.015020] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.586 [2024-07-15 18:38:26.015396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.586 [2024-07-15 18:38:26.015430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:03.586 [2024-07-15 18:38:26.019097] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.586 [2024-07-15 18:38:26.019498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.586 [2024-07-15 18:38:26.019526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:03.586 [2024-07-15 18:38:26.023120] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.586 [2024-07-15 18:38:26.023517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.586 [2024-07-15 18:38:26.023550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.586 [2024-07-15 18:38:26.027108] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.586 [2024-07-15 18:38:26.027514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.586 [2024-07-15 18:38:26.027546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:03.586 [2024-07-15 18:38:26.031130] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.586 [2024-07-15 18:38:26.031533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.586 [2024-07-15 18:38:26.031576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:03.586 [2024-07-15 18:38:26.035162] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.586 [2024-07-15 18:38:26.035527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.586 [2024-07-15 18:38:26.035563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:03.586 [2024-07-15 18:38:26.039173] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.586 [2024-07-15 18:38:26.039562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.586 [2024-07-15 18:38:26.039599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.586 [2024-07-15 18:38:26.043099] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.586 [2024-07-15 18:38:26.043483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.586 [2024-07-15 18:38:26.043515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:03.586 [2024-07-15 18:38:26.046967] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.586 [2024-07-15 18:38:26.047341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.586 [2024-07-15 18:38:26.047370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:03.586 [2024-07-15 18:38:26.050921] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.586 [2024-07-15 18:38:26.051291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.586 [2024-07-15 18:38:26.051320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:03.586 [2024-07-15 18:38:26.054785] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.586 [2024-07-15 18:38:26.055162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.586 [2024-07-15 18:38:26.055189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.586 [2024-07-15 18:38:26.058704] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.586 [2024-07-15 18:38:26.059074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.586 [2024-07-15 18:38:26.059105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:03.586 [2024-07-15 18:38:26.062639] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.586 [2024-07-15 18:38:26.063012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.586 [2024-07-15 18:38:26.063039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:03.586 [2024-07-15 18:38:26.066513] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.586 [2024-07-15 18:38:26.066887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.586 [2024-07-15 18:38:26.066915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:03.586 [2024-07-15 18:38:26.070401] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.586 [2024-07-15 18:38:26.070765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.586 [2024-07-15 18:38:26.070799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.586 [2024-07-15 18:38:26.074315] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.586 [2024-07-15 18:38:26.074702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.586 [2024-07-15 18:38:26.074732] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:03.586 [2024-07-15 18:38:26.078241] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.586 [2024-07-15 18:38:26.078626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.586 [2024-07-15 18:38:26.078645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:03.586 [2024-07-15 18:38:26.082180] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.586 [2024-07-15 18:38:26.082555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.586 [2024-07-15 18:38:26.082590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:03.586 [2024-07-15 18:38:26.086114] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.586 [2024-07-15 18:38:26.086484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.586 [2024-07-15 18:38:26.086514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.586 [2024-07-15 18:38:26.090082] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.586 [2024-07-15 18:38:26.090418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.586 [2024-07-15 18:38:26.090437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:03.586 [2024-07-15 18:38:26.093803] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.586 [2024-07-15 18:38:26.094128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.586 [2024-07-15 18:38:26.094161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:03.586 [2024-07-15 18:38:26.097441] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.586 [2024-07-15 18:38:26.097774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.586 [2024-07-15 18:38:26.097806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:03.586 [2024-07-15 18:38:26.101222] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.586 [2024-07-15 18:38:26.101576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.586 
[2024-07-15 18:38:26.101602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.587 [2024-07-15 18:38:26.104898] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.587 [2024-07-15 18:38:26.105229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.587 [2024-07-15 18:38:26.105260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:03.587 [2024-07-15 18:38:26.108563] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.587 [2024-07-15 18:38:26.108915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.587 [2024-07-15 18:38:26.108941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:03.587 [2024-07-15 18:38:26.112260] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.587 [2024-07-15 18:38:26.112610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.587 [2024-07-15 18:38:26.112637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:03.587 [2024-07-15 18:38:26.115953] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.587 [2024-07-15 18:38:26.116280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.587 [2024-07-15 18:38:26.116307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.587 [2024-07-15 18:38:26.119650] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.587 [2024-07-15 18:38:26.119991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.587 [2024-07-15 18:38:26.120023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:03.587 [2024-07-15 18:38:26.123302] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.587 [2024-07-15 18:38:26.123654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.587 [2024-07-15 18:38:26.123680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:03.587 [2024-07-15 18:38:26.127032] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.587 [2024-07-15 18:38:26.127360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:19:03.587 [2024-07-15 18:38:26.127386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:03.587 [2024-07-15 18:38:26.130776] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.587 [2024-07-15 18:38:26.131124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.587 [2024-07-15 18:38:26.131150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.587 [2024-07-15 18:38:26.134532] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.587 [2024-07-15 18:38:26.134877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.587 [2024-07-15 18:38:26.134900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:03.587 [2024-07-15 18:38:26.138262] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.587 [2024-07-15 18:38:26.138605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.587 [2024-07-15 18:38:26.138624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:03.587 [2024-07-15 18:38:26.141987] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.587 [2024-07-15 18:38:26.142312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.587 [2024-07-15 18:38:26.142339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:03.587 [2024-07-15 18:38:26.145651] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.587 [2024-07-15 18:38:26.145991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.587 [2024-07-15 18:38:26.146021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.587 [2024-07-15 18:38:26.149324] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.587 [2024-07-15 18:38:26.149674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.587 [2024-07-15 18:38:26.149700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:03.587 [2024-07-15 18:38:26.153147] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.587 [2024-07-15 18:38:26.153482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.587 [2024-07-15 18:38:26.153509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:03.587 [2024-07-15 18:38:26.156821] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.587 [2024-07-15 18:38:26.157156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.587 [2024-07-15 18:38:26.157181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:03.587 [2024-07-15 18:38:26.160493] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.587 [2024-07-15 18:38:26.160840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.587 [2024-07-15 18:38:26.160867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.587 [2024-07-15 18:38:26.164190] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.587 [2024-07-15 18:38:26.164521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.587 [2024-07-15 18:38:26.164554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:03.587 [2024-07-15 18:38:26.167911] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.587 [2024-07-15 18:38:26.168259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.587 [2024-07-15 18:38:26.168289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:03.587 [2024-07-15 18:38:26.171656] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.587 [2024-07-15 18:38:26.172004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.587 [2024-07-15 18:38:26.172035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:03.587 [2024-07-15 18:38:26.175385] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.587 [2024-07-15 18:38:26.175738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.587 [2024-07-15 18:38:26.175765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.587 [2024-07-15 18:38:26.179133] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.587 [2024-07-15 18:38:26.179479] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.587 [2024-07-15 18:38:26.179508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:03.587 [2024-07-15 18:38:26.182908] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.587 [2024-07-15 18:38:26.183246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.587 [2024-07-15 18:38:26.183272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:03.587 [2024-07-15 18:38:26.186587] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.587 [2024-07-15 18:38:26.186933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.587 [2024-07-15 18:38:26.186958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:03.587 [2024-07-15 18:38:26.190259] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.587 [2024-07-15 18:38:26.190613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.587 [2024-07-15 18:38:26.190633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.587 [2024-07-15 18:38:26.194031] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.587 [2024-07-15 18:38:26.194358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.587 [2024-07-15 18:38:26.194376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:03.587 [2024-07-15 18:38:26.197686] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.587 [2024-07-15 18:38:26.198038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.587 [2024-07-15 18:38:26.198067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:03.845 [2024-07-15 18:38:26.201416] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.845 [2024-07-15 18:38:26.201765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.845 [2024-07-15 18:38:26.201790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:03.845 [2024-07-15 18:38:26.205190] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.845 
[2024-07-15 18:38:26.205527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.845 [2024-07-15 18:38:26.205547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.846 [2024-07-15 18:38:26.208912] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.846 [2024-07-15 18:38:26.209241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.846 [2024-07-15 18:38:26.209267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:03.846 [2024-07-15 18:38:26.212586] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.846 [2024-07-15 18:38:26.212910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.846 [2024-07-15 18:38:26.212935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:03.846 [2024-07-15 18:38:26.216311] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.846 [2024-07-15 18:38:26.216664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.846 [2024-07-15 18:38:26.216694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:03.846 [2024-07-15 18:38:26.220047] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.846 [2024-07-15 18:38:26.220391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.846 [2024-07-15 18:38:26.220424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.846 [2024-07-15 18:38:26.223782] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.846 [2024-07-15 18:38:26.224124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.846 [2024-07-15 18:38:26.224160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:03.846 [2024-07-15 18:38:26.227478] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.846 [2024-07-15 18:38:26.227830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.846 [2024-07-15 18:38:26.227856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:03.846 [2024-07-15 18:38:26.231139] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) 
with pdu=0x2000190fef90 00:19:03.846 [2024-07-15 18:38:26.231469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.846 [2024-07-15 18:38:26.231495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:03.846 [2024-07-15 18:38:26.234796] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.846 [2024-07-15 18:38:26.235136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.846 [2024-07-15 18:38:26.235161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.846 [2024-07-15 18:38:26.238538] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.846 [2024-07-15 18:38:26.238895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.846 [2024-07-15 18:38:26.238926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:03.846 [2024-07-15 18:38:26.242272] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.846 [2024-07-15 18:38:26.242617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.846 [2024-07-15 18:38:26.242640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:03.846 [2024-07-15 18:38:26.245947] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.846 [2024-07-15 18:38:26.246288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.846 [2024-07-15 18:38:26.246307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:03.846 [2024-07-15 18:38:26.249599] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.846 [2024-07-15 18:38:26.249926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.846 [2024-07-15 18:38:26.249949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.846 [2024-07-15 18:38:26.253280] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.846 [2024-07-15 18:38:26.253612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.846 [2024-07-15 18:38:26.253634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:03.846 [2024-07-15 18:38:26.256924] tcp.c:2081:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.846 [2024-07-15 18:38:26.257247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.846 [2024-07-15 18:38:26.257273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:03.846 [2024-07-15 18:38:26.260409] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.846 [2024-07-15 18:38:26.260742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.846 [2024-07-15 18:38:26.260769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:03.846 [2024-07-15 18:38:26.263799] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.846 [2024-07-15 18:38:26.264096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.846 [2024-07-15 18:38:26.264121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.846 [2024-07-15 18:38:26.267204] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.846 [2024-07-15 18:38:26.267506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.846 [2024-07-15 18:38:26.267532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:03.846 [2024-07-15 18:38:26.270673] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.846 [2024-07-15 18:38:26.270949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.846 [2024-07-15 18:38:26.270975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:03.846 [2024-07-15 18:38:26.274094] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.846 [2024-07-15 18:38:26.274390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.846 [2024-07-15 18:38:26.274413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:03.846 [2024-07-15 18:38:26.277480] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.846 [2024-07-15 18:38:26.277774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.846 [2024-07-15 18:38:26.277794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.846 [2024-07-15 18:38:26.280860] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.846 [2024-07-15 18:38:26.281145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.846 [2024-07-15 18:38:26.281163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:03.846 [2024-07-15 18:38:26.284284] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.846 [2024-07-15 18:38:26.284592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.846 [2024-07-15 18:38:26.284610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:03.846 [2024-07-15 18:38:26.287636] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.846 [2024-07-15 18:38:26.287932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.846 [2024-07-15 18:38:26.287957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:03.846 [2024-07-15 18:38:26.291096] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.846 [2024-07-15 18:38:26.291399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.846 [2024-07-15 18:38:26.291425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.846 [2024-07-15 18:38:26.294513] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.846 [2024-07-15 18:38:26.294813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.846 [2024-07-15 18:38:26.294831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:03.846 [2024-07-15 18:38:26.297955] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.846 [2024-07-15 18:38:26.298247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.846 [2024-07-15 18:38:26.298272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:03.846 [2024-07-15 18:38:26.301319] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.846 [2024-07-15 18:38:26.301629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.846 [2024-07-15 18:38:26.301648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:19:03.846 [2024-07-15 18:38:26.304771] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.846 [2024-07-15 18:38:26.305062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.846 [2024-07-15 18:38:26.305080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.846 [2024-07-15 18:38:26.308200] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.847 [2024-07-15 18:38:26.308497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.847 [2024-07-15 18:38:26.308525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:03.847 [2024-07-15 18:38:26.311582] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.847 [2024-07-15 18:38:26.311879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.847 [2024-07-15 18:38:26.311904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:03.847 [2024-07-15 18:38:26.315005] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.847 [2024-07-15 18:38:26.315297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.847 [2024-07-15 18:38:26.315326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:03.847 [2024-07-15 18:38:26.318420] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.847 [2024-07-15 18:38:26.318721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.847 [2024-07-15 18:38:26.318740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.847 [2024-07-15 18:38:26.321859] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.847 [2024-07-15 18:38:26.322161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.847 [2024-07-15 18:38:26.322184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:03.847 [2024-07-15 18:38:26.325169] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.847 [2024-07-15 18:38:26.325469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.847 [2024-07-15 18:38:26.325487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:03.847 [2024-07-15 18:38:26.328652] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.847 [2024-07-15 18:38:26.328944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.847 [2024-07-15 18:38:26.328962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:03.847 [2024-07-15 18:38:26.332097] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.847 [2024-07-15 18:38:26.332384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.847 [2024-07-15 18:38:26.332410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.847 [2024-07-15 18:38:26.335504] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.847 [2024-07-15 18:38:26.335815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.847 [2024-07-15 18:38:26.335841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:03.847 [2024-07-15 18:38:26.338889] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.847 [2024-07-15 18:38:26.339187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.847 [2024-07-15 18:38:26.339221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:03.847 [2024-07-15 18:38:26.342360] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.847 [2024-07-15 18:38:26.342645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.847 [2024-07-15 18:38:26.342664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:03.847 [2024-07-15 18:38:26.345755] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.847 [2024-07-15 18:38:26.346049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.847 [2024-07-15 18:38:26.346071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.847 [2024-07-15 18:38:26.349155] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.847 [2024-07-15 18:38:26.349451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.847 [2024-07-15 18:38:26.349474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:03.847 [2024-07-15 18:38:26.352583] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.847 [2024-07-15 18:38:26.352869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.847 [2024-07-15 18:38:26.352898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:03.847 [2024-07-15 18:38:26.355960] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.847 [2024-07-15 18:38:26.356251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.847 [2024-07-15 18:38:26.356278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:03.847 [2024-07-15 18:38:26.359288] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.847 [2024-07-15 18:38:26.359584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.847 [2024-07-15 18:38:26.359610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.847 [2024-07-15 18:38:26.362669] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.847 [2024-07-15 18:38:26.362966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.847 [2024-07-15 18:38:26.362991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:03.847 [2024-07-15 18:38:26.366159] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.847 [2024-07-15 18:38:26.366454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.847 [2024-07-15 18:38:26.366477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:03.847 [2024-07-15 18:38:26.369583] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.847 [2024-07-15 18:38:26.369881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.847 [2024-07-15 18:38:26.369917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:03.847 [2024-07-15 18:38:26.373004] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.847 [2024-07-15 18:38:26.373303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.847 [2024-07-15 18:38:26.373331] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.847 [2024-07-15 18:38:26.376492] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.847 [2024-07-15 18:38:26.376802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.847 [2024-07-15 18:38:26.376828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:03.847 [2024-07-15 18:38:26.380018] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.847 [2024-07-15 18:38:26.380315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.847 [2024-07-15 18:38:26.380342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:03.847 [2024-07-15 18:38:26.383398] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.847 [2024-07-15 18:38:26.383691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.847 [2024-07-15 18:38:26.383716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:03.847 [2024-07-15 18:38:26.386764] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.847 [2024-07-15 18:38:26.387056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.847 [2024-07-15 18:38:26.387079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.847 [2024-07-15 18:38:26.390189] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.847 [2024-07-15 18:38:26.390484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.847 [2024-07-15 18:38:26.390502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:03.847 [2024-07-15 18:38:26.393638] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.847 [2024-07-15 18:38:26.393928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.847 [2024-07-15 18:38:26.393953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:03.847 [2024-07-15 18:38:26.397050] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.847 [2024-07-15 18:38:26.397339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.847 
[2024-07-15 18:38:26.397370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:03.847 [2024-07-15 18:38:26.400537] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.847 [2024-07-15 18:38:26.400824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.847 [2024-07-15 18:38:26.400862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.847 [2024-07-15 18:38:26.403882] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.847 [2024-07-15 18:38:26.404183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.847 [2024-07-15 18:38:26.404219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:03.847 [2024-07-15 18:38:26.407305] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.848 [2024-07-15 18:38:26.407597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.848 [2024-07-15 18:38:26.407626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:03.848 [2024-07-15 18:38:26.410708] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.848 [2024-07-15 18:38:26.411008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.848 [2024-07-15 18:38:26.411033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:03.848 [2024-07-15 18:38:26.414159] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.848 [2024-07-15 18:38:26.414457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.848 [2024-07-15 18:38:26.414493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.848 [2024-07-15 18:38:26.417580] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.848 [2024-07-15 18:38:26.417878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.848 [2024-07-15 18:38:26.417908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:03.848 [2024-07-15 18:38:26.421105] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.848 [2024-07-15 18:38:26.421397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.848 [2024-07-15 18:38:26.421429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:03.848 [2024-07-15 18:38:26.424594] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.848 [2024-07-15 18:38:26.424879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.848 [2024-07-15 18:38:26.424909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:03.848 [2024-07-15 18:38:26.428035] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.848 [2024-07-15 18:38:26.428318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.848 [2024-07-15 18:38:26.428347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.848 [2024-07-15 18:38:26.431428] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.848 [2024-07-15 18:38:26.431734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.848 [2024-07-15 18:38:26.431764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:03.848 [2024-07-15 18:38:26.434791] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.848 [2024-07-15 18:38:26.435086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.848 [2024-07-15 18:38:26.435125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:03.848 [2024-07-15 18:38:26.438242] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.848 [2024-07-15 18:38:26.438522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.848 [2024-07-15 18:38:26.438550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:03.848 [2024-07-15 18:38:26.441683] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.848 [2024-07-15 18:38:26.441982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.848 [2024-07-15 18:38:26.442011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.848 [2024-07-15 18:38:26.445134] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.848 [2024-07-15 18:38:26.445430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.848 [2024-07-15 18:38:26.445462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:03.848 [2024-07-15 18:38:26.448555] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.848 [2024-07-15 18:38:26.448864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.848 [2024-07-15 18:38:26.448894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:03.848 [2024-07-15 18:38:26.451999] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.848 [2024-07-15 18:38:26.452295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.848 [2024-07-15 18:38:26.452326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:03.848 [2024-07-15 18:38:26.455367] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:03.848 [2024-07-15 18:38:26.455669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.848 [2024-07-15 18:38:26.455699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.108 [2024-07-15 18:38:26.458749] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.108 [2024-07-15 18:38:26.459042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.108 [2024-07-15 18:38:26.459073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:04.108 [2024-07-15 18:38:26.462205] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.108 [2024-07-15 18:38:26.462505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.108 [2024-07-15 18:38:26.462536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:04.108 [2024-07-15 18:38:26.465657] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.108 [2024-07-15 18:38:26.465947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.108 [2024-07-15 18:38:26.465977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:04.108 [2024-07-15 18:38:26.469084] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.108 [2024-07-15 18:38:26.469370] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.108 [2024-07-15 18:38:26.469399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.108 [2024-07-15 18:38:26.472518] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.108 [2024-07-15 18:38:26.472810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.108 [2024-07-15 18:38:26.472851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:04.108 [2024-07-15 18:38:26.475978] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.108 [2024-07-15 18:38:26.476277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.108 [2024-07-15 18:38:26.476308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:04.108 [2024-07-15 18:38:26.479343] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.108 [2024-07-15 18:38:26.479650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.108 [2024-07-15 18:38:26.479680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:04.108 [2024-07-15 18:38:26.482759] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.108 [2024-07-15 18:38:26.483053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.108 [2024-07-15 18:38:26.483083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.108 [2024-07-15 18:38:26.486241] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.108 [2024-07-15 18:38:26.486544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.108 [2024-07-15 18:38:26.486588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:04.108 [2024-07-15 18:38:26.489674] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.108 [2024-07-15 18:38:26.489968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.108 [2024-07-15 18:38:26.489997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:04.108 [2024-07-15 18:38:26.493169] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.108 
[2024-07-15 18:38:26.493461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.108 [2024-07-15 18:38:26.493492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:04.108 [2024-07-15 18:38:26.496645] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.108 [2024-07-15 18:38:26.496928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.108 [2024-07-15 18:38:26.496952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.108 [2024-07-15 18:38:26.500052] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.108 [2024-07-15 18:38:26.500356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.108 [2024-07-15 18:38:26.500387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:04.108 [2024-07-15 18:38:26.503557] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.108 [2024-07-15 18:38:26.503878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.108 [2024-07-15 18:38:26.503907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:04.108 [2024-07-15 18:38:26.506940] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.108 [2024-07-15 18:38:26.507251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.108 [2024-07-15 18:38:26.507283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:04.108 [2024-07-15 18:38:26.510364] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.108 [2024-07-15 18:38:26.510667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.108 [2024-07-15 18:38:26.510692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.108 [2024-07-15 18:38:26.513815] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.108 [2024-07-15 18:38:26.514104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.108 [2024-07-15 18:38:26.514132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:04.108 [2024-07-15 18:38:26.517244] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.108 [2024-07-15 18:38:26.517534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.108 [2024-07-15 18:38:26.517562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:04.108 [2024-07-15 18:38:26.520620] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.108 [2024-07-15 18:38:26.520930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.108 [2024-07-15 18:38:26.520955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:04.108 [2024-07-15 18:38:26.524041] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.108 [2024-07-15 18:38:26.524347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.108 [2024-07-15 18:38:26.524378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.108 [2024-07-15 18:38:26.527432] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.108 [2024-07-15 18:38:26.527741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.108 [2024-07-15 18:38:26.527768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:04.108 [2024-07-15 18:38:26.530772] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.108 [2024-07-15 18:38:26.531056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.108 [2024-07-15 18:38:26.531086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:04.108 [2024-07-15 18:38:26.533956] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.108 [2024-07-15 18:38:26.534228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.108 [2024-07-15 18:38:26.534254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:04.108 [2024-07-15 18:38:26.537091] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.108 [2024-07-15 18:38:26.537368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.108 [2024-07-15 18:38:26.537394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.108 [2024-07-15 18:38:26.540300] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.108 [2024-07-15 18:38:26.540563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.108 [2024-07-15 18:38:26.540600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:04.108 [2024-07-15 18:38:26.543460] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.108 [2024-07-15 18:38:26.543756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.108 [2024-07-15 18:38:26.543786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:04.108 [2024-07-15 18:38:26.546676] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.108 [2024-07-15 18:38:26.546951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.108 [2024-07-15 18:38:26.546981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:04.108 [2024-07-15 18:38:26.549837] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.108 [2024-07-15 18:38:26.550111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.108 [2024-07-15 18:38:26.550142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.108 [2024-07-15 18:38:26.552987] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.108 [2024-07-15 18:38:26.553271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.108 [2024-07-15 18:38:26.553298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:04.108 [2024-07-15 18:38:26.556221] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.108 [2024-07-15 18:38:26.556494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.108 [2024-07-15 18:38:26.556525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:04.108 [2024-07-15 18:38:26.559423] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.108 [2024-07-15 18:38:26.559711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.108 [2024-07-15 18:38:26.559742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:19:04.108 [2024-07-15 18:38:26.562666] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.108 [2024-07-15 18:38:26.562934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.108 [2024-07-15 18:38:26.562964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.108 [2024-07-15 18:38:26.565909] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.108 [2024-07-15 18:38:26.566185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.108 [2024-07-15 18:38:26.566211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:04.108 [2024-07-15 18:38:26.569129] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.108 [2024-07-15 18:38:26.569410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.108 [2024-07-15 18:38:26.569441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:04.108 [2024-07-15 18:38:26.572330] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.108 [2024-07-15 18:38:26.572628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.108 [2024-07-15 18:38:26.572653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:04.108 [2024-07-15 18:38:26.575561] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.108 [2024-07-15 18:38:26.575853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.108 [2024-07-15 18:38:26.575883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.108 [2024-07-15 18:38:26.578812] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.108 [2024-07-15 18:38:26.579091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.108 [2024-07-15 18:38:26.579123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:04.108 [2024-07-15 18:38:26.582011] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.108 [2024-07-15 18:38:26.582288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.108 [2024-07-15 18:38:26.582317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:04.108 [2024-07-15 18:38:26.585159] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.109 [2024-07-15 18:38:26.585432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.109 [2024-07-15 18:38:26.585458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:04.109 [2024-07-15 18:38:26.588293] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.109 [2024-07-15 18:38:26.588561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.109 [2024-07-15 18:38:26.588605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.109 [2024-07-15 18:38:26.591477] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.109 [2024-07-15 18:38:26.591753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.109 [2024-07-15 18:38:26.591782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:04.109 [2024-07-15 18:38:26.594664] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.109 [2024-07-15 18:38:26.594934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.109 [2024-07-15 18:38:26.594963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:04.109 [2024-07-15 18:38:26.597871] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.109 [2024-07-15 18:38:26.598148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.109 [2024-07-15 18:38:26.598178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:04.109 [2024-07-15 18:38:26.601019] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.109 [2024-07-15 18:38:26.601294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.109 [2024-07-15 18:38:26.601322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.109 [2024-07-15 18:38:26.604250] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.109 [2024-07-15 18:38:26.604528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.109 [2024-07-15 18:38:26.604555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:04.109 [2024-07-15 18:38:26.607459] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.109 [2024-07-15 18:38:26.607753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.109 [2024-07-15 18:38:26.607784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:04.109 [2024-07-15 18:38:26.610643] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.109 [2024-07-15 18:38:26.610903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.109 [2024-07-15 18:38:26.610930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:04.109 [2024-07-15 18:38:26.613848] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.109 [2024-07-15 18:38:26.614132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.109 [2024-07-15 18:38:26.614158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.109 [2024-07-15 18:38:26.617100] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.109 [2024-07-15 18:38:26.617374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.109 [2024-07-15 18:38:26.617403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:04.109 [2024-07-15 18:38:26.620240] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.109 [2024-07-15 18:38:26.620514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.109 [2024-07-15 18:38:26.620540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:04.109 [2024-07-15 18:38:26.623444] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.109 [2024-07-15 18:38:26.623722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.109 [2024-07-15 18:38:26.623753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:04.109 [2024-07-15 18:38:26.626614] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.109 [2024-07-15 18:38:26.626888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.109 [2024-07-15 18:38:26.626926] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.109 [2024-07-15 18:38:26.629820] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.109 [2024-07-15 18:38:26.630094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.109 [2024-07-15 18:38:26.630122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:04.109 [2024-07-15 18:38:26.633078] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.109 [2024-07-15 18:38:26.633353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.109 [2024-07-15 18:38:26.633381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:04.109 [2024-07-15 18:38:26.636211] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.109 [2024-07-15 18:38:26.636479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.109 [2024-07-15 18:38:26.636510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:04.109 [2024-07-15 18:38:26.639411] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.109 [2024-07-15 18:38:26.639703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.109 [2024-07-15 18:38:26.639733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.109 [2024-07-15 18:38:26.642556] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.109 [2024-07-15 18:38:26.642834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.109 [2024-07-15 18:38:26.642858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:04.109 [2024-07-15 18:38:26.645797] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.109 [2024-07-15 18:38:26.646060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.109 [2024-07-15 18:38:26.646086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:04.109 [2024-07-15 18:38:26.649052] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.109 [2024-07-15 18:38:26.649316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.109 
[2024-07-15 18:38:26.649342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:04.109 [2024-07-15 18:38:26.652266] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.109 [2024-07-15 18:38:26.652554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.109 [2024-07-15 18:38:26.652589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.109 [2024-07-15 18:38:26.655500] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.109 [2024-07-15 18:38:26.655789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.109 [2024-07-15 18:38:26.655819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:04.109 [2024-07-15 18:38:26.658710] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.109 [2024-07-15 18:38:26.658985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.109 [2024-07-15 18:38:26.659014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:04.109 [2024-07-15 18:38:26.661927] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.109 [2024-07-15 18:38:26.662214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.109 [2024-07-15 18:38:26.662240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:04.109 [2024-07-15 18:38:26.665137] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.109 [2024-07-15 18:38:26.665415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.109 [2024-07-15 18:38:26.665442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.109 [2024-07-15 18:38:26.668474] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.109 [2024-07-15 18:38:26.668789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.109 [2024-07-15 18:38:26.668821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:04.109 [2024-07-15 18:38:26.671805] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.109 [2024-07-15 18:38:26.672086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.109 [2024-07-15 18:38:26.672117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:04.109 [2024-07-15 18:38:26.675046] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.109 [2024-07-15 18:38:26.675325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.109 [2024-07-15 18:38:26.675350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:04.109 [2024-07-15 18:38:26.678259] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.109 [2024-07-15 18:38:26.678523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.109 [2024-07-15 18:38:26.678549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.109 [2024-07-15 18:38:26.681488] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.109 [2024-07-15 18:38:26.681780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.109 [2024-07-15 18:38:26.681804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:04.109 [2024-07-15 18:38:26.684605] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.109 [2024-07-15 18:38:26.684887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.109 [2024-07-15 18:38:26.684913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:04.109 [2024-07-15 18:38:26.687826] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.109 [2024-07-15 18:38:26.688103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.109 [2024-07-15 18:38:26.688134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:04.109 [2024-07-15 18:38:26.690963] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.109 [2024-07-15 18:38:26.691236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.109 [2024-07-15 18:38:26.691253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.109 [2024-07-15 18:38:26.694080] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.109 [2024-07-15 18:38:26.694360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.109 [2024-07-15 18:38:26.694378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:04.109 [2024-07-15 18:38:26.697313] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.109 [2024-07-15 18:38:26.697600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.109 [2024-07-15 18:38:26.697619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:04.109 [2024-07-15 18:38:26.700546] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.109 [2024-07-15 18:38:26.700841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.109 [2024-07-15 18:38:26.700867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:04.109 [2024-07-15 18:38:26.703799] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.109 [2024-07-15 18:38:26.704082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.109 [2024-07-15 18:38:26.704109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.109 [2024-07-15 18:38:26.706991] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.109 [2024-07-15 18:38:26.707271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.109 [2024-07-15 18:38:26.707289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:04.109 [2024-07-15 18:38:26.710173] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.109 [2024-07-15 18:38:26.710455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.109 [2024-07-15 18:38:26.710473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:04.109 [2024-07-15 18:38:26.713339] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.109 [2024-07-15 18:38:26.713629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.109 [2024-07-15 18:38:26.713655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:04.109 [2024-07-15 18:38:26.716578] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.109 [2024-07-15 18:38:26.716857] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.109 [2024-07-15 18:38:26.716884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.369 [2024-07-15 18:38:26.719780] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.369 [2024-07-15 18:38:26.720059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.369 [2024-07-15 18:38:26.720085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:04.369 [2024-07-15 18:38:26.722977] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.369 [2024-07-15 18:38:26.723246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.369 [2024-07-15 18:38:26.723264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:04.369 [2024-07-15 18:38:26.726155] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.369 [2024-07-15 18:38:26.726432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.369 [2024-07-15 18:38:26.726450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:04.369 [2024-07-15 18:38:26.729328] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.369 [2024-07-15 18:38:26.729622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.369 [2024-07-15 18:38:26.729648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.369 [2024-07-15 18:38:26.732537] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.369 [2024-07-15 18:38:26.732839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.369 [2024-07-15 18:38:26.732864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:04.369 [2024-07-15 18:38:26.735730] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.369 [2024-07-15 18:38:26.736005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.369 [2024-07-15 18:38:26.736031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:04.369 [2024-07-15 18:38:26.738861] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.369 
[2024-07-15 18:38:26.739132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.369 [2024-07-15 18:38:26.739150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:04.369 [2024-07-15 18:38:26.742046] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.369 [2024-07-15 18:38:26.742328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.369 [2024-07-15 18:38:26.742345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.369 [2024-07-15 18:38:26.745189] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.369 [2024-07-15 18:38:26.745469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.369 [2024-07-15 18:38:26.745487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:04.369 [2024-07-15 18:38:26.748394] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.369 [2024-07-15 18:38:26.748684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.369 [2024-07-15 18:38:26.748703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:04.369 [2024-07-15 18:38:26.751551] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.369 [2024-07-15 18:38:26.751823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.369 [2024-07-15 18:38:26.751844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:04.369 [2024-07-15 18:38:26.754693] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.369 [2024-07-15 18:38:26.754945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.369 [2024-07-15 18:38:26.754963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.369 [2024-07-15 18:38:26.757816] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.369 [2024-07-15 18:38:26.758089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.369 [2024-07-15 18:38:26.758106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:04.369 [2024-07-15 18:38:26.760948] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.369 [2024-07-15 18:38:26.761225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.369 [2024-07-15 18:38:26.761242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:04.369 [2024-07-15 18:38:26.764157] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.369 [2024-07-15 18:38:26.764432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.369 [2024-07-15 18:38:26.764459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:04.369 [2024-07-15 18:38:26.767347] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.370 [2024-07-15 18:38:26.767636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.370 [2024-07-15 18:38:26.767656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.370 [2024-07-15 18:38:26.770509] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.370 [2024-07-15 18:38:26.770790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.370 [2024-07-15 18:38:26.770809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:04.370 [2024-07-15 18:38:26.773614] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.370 [2024-07-15 18:38:26.773884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.370 [2024-07-15 18:38:26.773902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:04.370 [2024-07-15 18:38:26.776812] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.370 [2024-07-15 18:38:26.777096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.370 [2024-07-15 18:38:26.777114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:04.370 [2024-07-15 18:38:26.780059] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.370 [2024-07-15 18:38:26.780337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.370 [2024-07-15 18:38:26.780364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.370 [2024-07-15 18:38:26.783329] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.370 [2024-07-15 18:38:26.783609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.370 [2024-07-15 18:38:26.783632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:04.370 [2024-07-15 18:38:26.786456] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.370 [2024-07-15 18:38:26.786741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.370 [2024-07-15 18:38:26.786759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:04.370 [2024-07-15 18:38:26.789633] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.370 [2024-07-15 18:38:26.789900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.370 [2024-07-15 18:38:26.789918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:04.370 [2024-07-15 18:38:26.792810] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.370 [2024-07-15 18:38:26.793078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.370 [2024-07-15 18:38:26.793096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.370 [2024-07-15 18:38:26.796003] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.370 [2024-07-15 18:38:26.796287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.370 [2024-07-15 18:38:26.796314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:04.370 [2024-07-15 18:38:26.799220] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.370 [2024-07-15 18:38:26.799501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.370 [2024-07-15 18:38:26.799526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:04.370 [2024-07-15 18:38:26.802438] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.370 [2024-07-15 18:38:26.802719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.370 [2024-07-15 18:38:26.802737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:19:04.370 [2024-07-15 18:38:26.805582] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.370 [2024-07-15 18:38:26.805845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.370 [2024-07-15 18:38:26.805863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.370 [2024-07-15 18:38:26.808705] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.370 [2024-07-15 18:38:26.808985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.370 [2024-07-15 18:38:26.809007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:04.370 [2024-07-15 18:38:26.811916] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.370 [2024-07-15 18:38:26.812201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.370 [2024-07-15 18:38:26.812228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:04.370 [2024-07-15 18:38:26.815061] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.370 [2024-07-15 18:38:26.815349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.370 [2024-07-15 18:38:26.815377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:04.370 [2024-07-15 18:38:26.818279] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.370 [2024-07-15 18:38:26.818554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.370 [2024-07-15 18:38:26.818583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.370 [2024-07-15 18:38:26.821369] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.370 [2024-07-15 18:38:26.821651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.370 [2024-07-15 18:38:26.821669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:04.370 [2024-07-15 18:38:26.824538] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.370 [2024-07-15 18:38:26.824813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.370 [2024-07-15 18:38:26.824839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:04.370 [2024-07-15 18:38:26.827666] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.370 [2024-07-15 18:38:26.827948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.370 [2024-07-15 18:38:26.827973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:04.370 [2024-07-15 18:38:26.830920] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.370 [2024-07-15 18:38:26.831206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.370 [2024-07-15 18:38:26.831239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.370 [2024-07-15 18:38:26.834071] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.370 [2024-07-15 18:38:26.834348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.370 [2024-07-15 18:38:26.834366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:04.370 [2024-07-15 18:38:26.837256] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.370 [2024-07-15 18:38:26.837527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.370 [2024-07-15 18:38:26.837545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:04.370 [2024-07-15 18:38:26.840468] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.370 [2024-07-15 18:38:26.840747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.370 [2024-07-15 18:38:26.840775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:04.370 [2024-07-15 18:38:26.843618] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.370 [2024-07-15 18:38:26.843895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.370 [2024-07-15 18:38:26.843921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.370 [2024-07-15 18:38:26.846792] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.370 [2024-07-15 18:38:26.847064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.370 [2024-07-15 18:38:26.847091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:04.370 [2024-07-15 18:38:26.849970] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.370 [2024-07-15 18:38:26.850227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.370 [2024-07-15 18:38:26.850254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:04.370 [2024-07-15 18:38:26.853163] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.370 [2024-07-15 18:38:26.853442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.370 [2024-07-15 18:38:26.853460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:04.370 [2024-07-15 18:38:26.856386] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.370 [2024-07-15 18:38:26.856666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.370 [2024-07-15 18:38:26.856692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.370 [2024-07-15 18:38:26.859616] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.370 [2024-07-15 18:38:26.859888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.370 [2024-07-15 18:38:26.859913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:04.370 [2024-07-15 18:38:26.862736] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.370 [2024-07-15 18:38:26.863011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.370 [2024-07-15 18:38:26.863037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:04.370 [2024-07-15 18:38:26.865901] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.370 [2024-07-15 18:38:26.866166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.370 [2024-07-15 18:38:26.866183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:04.370 [2024-07-15 18:38:26.869003] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.370 [2024-07-15 18:38:26.869282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.370 [2024-07-15 18:38:26.869300] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.370 [2024-07-15 18:38:26.872141] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.370 [2024-07-15 18:38:26.872417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.370 [2024-07-15 18:38:26.872447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:04.370 [2024-07-15 18:38:26.875284] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.370 [2024-07-15 18:38:26.875560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.370 [2024-07-15 18:38:26.875595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:04.370 [2024-07-15 18:38:26.878359] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.370 [2024-07-15 18:38:26.878639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.370 [2024-07-15 18:38:26.878657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:04.370 [2024-07-15 18:38:26.881513] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.370 [2024-07-15 18:38:26.881798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.370 [2024-07-15 18:38:26.881816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.370 [2024-07-15 18:38:26.884757] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.370 [2024-07-15 18:38:26.885010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.370 [2024-07-15 18:38:26.885028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:04.370 [2024-07-15 18:38:26.887935] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.370 [2024-07-15 18:38:26.888219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.370 [2024-07-15 18:38:26.888246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:04.370 [2024-07-15 18:38:26.891150] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.370 [2024-07-15 18:38:26.891440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.370 
[2024-07-15 18:38:26.891465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:04.370 [2024-07-15 18:38:26.894324] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.370 [2024-07-15 18:38:26.894610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.370 [2024-07-15 18:38:26.894629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.370 [2024-07-15 18:38:26.897537] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.370 [2024-07-15 18:38:26.897825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.370 [2024-07-15 18:38:26.897843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:04.370 [2024-07-15 18:38:26.900703] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.370 [2024-07-15 18:38:26.900983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.370 [2024-07-15 18:38:26.901001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:04.370 [2024-07-15 18:38:26.903864] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.370 [2024-07-15 18:38:26.904138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.370 [2024-07-15 18:38:26.904164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:04.370 [2024-07-15 18:38:26.907016] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.370 [2024-07-15 18:38:26.907292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.370 [2024-07-15 18:38:26.907318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.370 [2024-07-15 18:38:26.910172] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.370 [2024-07-15 18:38:26.910432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.370 [2024-07-15 18:38:26.910450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:04.370 [2024-07-15 18:38:26.913255] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.370 [2024-07-15 18:38:26.913536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.370 [2024-07-15 18:38:26.913554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:04.370 [2024-07-15 18:38:26.916482] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.370 [2024-07-15 18:38:26.916776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.370 [2024-07-15 18:38:26.916794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:04.370 [2024-07-15 18:38:26.919589] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.370 [2024-07-15 18:38:26.919864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.370 [2024-07-15 18:38:26.919888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.370 [2024-07-15 18:38:26.922831] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.371 [2024-07-15 18:38:26.923082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.371 [2024-07-15 18:38:26.923110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:04.371 [2024-07-15 18:38:26.925907] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.371 [2024-07-15 18:38:26.926162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.371 [2024-07-15 18:38:26.926180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:04.371 [2024-07-15 18:38:26.929033] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.371 [2024-07-15 18:38:26.929263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.371 [2024-07-15 18:38:26.929281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:04.371 [2024-07-15 18:38:26.932151] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.371 [2024-07-15 18:38:26.932386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.371 [2024-07-15 18:38:26.932418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.371 [2024-07-15 18:38:26.935260] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.371 [2024-07-15 18:38:26.935496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.371 [2024-07-15 18:38:26.935522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:04.371 [2024-07-15 18:38:26.938348] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.371 [2024-07-15 18:38:26.938601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.371 [2024-07-15 18:38:26.938620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:04.371 [2024-07-15 18:38:26.941288] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.371 [2024-07-15 18:38:26.941374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.371 [2024-07-15 18:38:26.941393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:04.371 [2024-07-15 18:38:26.944392] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.371 [2024-07-15 18:38:26.944484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.371 [2024-07-15 18:38:26.944503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.371 [2024-07-15 18:38:26.947580] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.371 [2024-07-15 18:38:26.947705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.371 [2024-07-15 18:38:26.947724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:04.371 [2024-07-15 18:38:26.950731] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.371 [2024-07-15 18:38:26.950831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.371 [2024-07-15 18:38:26.950849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:04.371 [2024-07-15 18:38:26.953914] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.371 [2024-07-15 18:38:26.953997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.371 [2024-07-15 18:38:26.954015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:04.371 [2024-07-15 18:38:26.957016] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.371 [2024-07-15 18:38:26.957108] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.371 [2024-07-15 18:38:26.957127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.371 [2024-07-15 18:38:26.960195] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.371 [2024-07-15 18:38:26.960283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.371 [2024-07-15 18:38:26.960301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:04.371 [2024-07-15 18:38:26.963387] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.371 [2024-07-15 18:38:26.963476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.371 [2024-07-15 18:38:26.963495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:04.371 [2024-07-15 18:38:26.966504] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.371 [2024-07-15 18:38:26.966642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.371 [2024-07-15 18:38:26.966661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:04.371 [2024-07-15 18:38:26.969633] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.371 [2024-07-15 18:38:26.969715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.371 [2024-07-15 18:38:26.969734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.371 [2024-07-15 18:38:26.972757] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.371 [2024-07-15 18:38:26.972902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.371 [2024-07-15 18:38:26.972921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:04.371 [2024-07-15 18:38:26.975918] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.371 [2024-07-15 18:38:26.976011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.371 [2024-07-15 18:38:26.976029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:04.371 [2024-07-15 18:38:26.979083] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.371 
[2024-07-15 18:38:26.979169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.371 [2024-07-15 18:38:26.979189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:04.629 [2024-07-15 18:38:26.982279] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.629 [2024-07-15 18:38:26.982390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.629 [2024-07-15 18:38:26.982408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.629 [2024-07-15 18:38:26.985428] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.629 [2024-07-15 18:38:26.985513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.629 [2024-07-15 18:38:26.985531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:04.629 [2024-07-15 18:38:26.988608] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.629 [2024-07-15 18:38:26.988698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.629 [2024-07-15 18:38:26.988717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:04.629 [2024-07-15 18:38:26.991747] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.629 [2024-07-15 18:38:26.991838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.629 [2024-07-15 18:38:26.991856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:04.629 [2024-07-15 18:38:26.994864] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.629 [2024-07-15 18:38:26.994990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.629 [2024-07-15 18:38:26.995008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.629 [2024-07-15 18:38:26.998037] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.629 [2024-07-15 18:38:26.998154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.629 [2024-07-15 18:38:26.998173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:04.629 [2024-07-15 18:38:27.001177] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.629 [2024-07-15 18:38:27.001319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.629 [2024-07-15 18:38:27.001351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:04.629 [2024-07-15 18:38:27.004354] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.629 [2024-07-15 18:38:27.004441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.629 [2024-07-15 18:38:27.004460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:04.629 [2024-07-15 18:38:27.007531] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.629 [2024-07-15 18:38:27.007680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.629 [2024-07-15 18:38:27.007699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.629 [2024-07-15 18:38:27.010647] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.629 [2024-07-15 18:38:27.010735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.629 [2024-07-15 18:38:27.010754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:04.629 [2024-07-15 18:38:27.013789] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.629 [2024-07-15 18:38:27.013873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.629 [2024-07-15 18:38:27.013892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:04.629 [2024-07-15 18:38:27.016987] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.629 [2024-07-15 18:38:27.017072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.629 [2024-07-15 18:38:27.017090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:04.629 [2024-07-15 18:38:27.020133] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.629 [2024-07-15 18:38:27.020262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.629 [2024-07-15 18:38:27.020280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.629 [2024-07-15 18:38:27.023313] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.629 [2024-07-15 18:38:27.023418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.629 [2024-07-15 18:38:27.023436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:04.629 [2024-07-15 18:38:27.026472] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.629 [2024-07-15 18:38:27.026617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.629 [2024-07-15 18:38:27.026636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:04.629 [2024-07-15 18:38:27.029660] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.629 [2024-07-15 18:38:27.029784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.629 [2024-07-15 18:38:27.029802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:04.629 [2024-07-15 18:38:27.032774] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.629 [2024-07-15 18:38:27.032898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.629 [2024-07-15 18:38:27.032917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.629 [2024-07-15 18:38:27.035913] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.629 [2024-07-15 18:38:27.036050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.629 [2024-07-15 18:38:27.036068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:04.629 [2024-07-15 18:38:27.039245] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.629 [2024-07-15 18:38:27.039463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.629 [2024-07-15 18:38:27.039626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:04.629 [2024-07-15 18:38:27.042548] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.629 [2024-07-15 18:38:27.042784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.629 [2024-07-15 18:38:27.042893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
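The run of tcp.c:2081:data_crc32_calc_done errors above is the NVMe/TCP data digest (DDGST) check firing: the receiving side recomputes CRC-32C over each data PDU's payload, and a mismatch against the digest carried on the wire causes the affected WRITE to complete with the COMMAND TRANSIENT TRANSPORT ERROR (00/22) status printed alongside each error. The self-contained C sketch below is only an illustration of that comparison; it is not SPDK code, and the payload size and names are invented for the example.

/*
 * Standalone illustration (not SPDK source) of the check behind the
 * "Data digest error" messages above: NVMe/TCP can carry a CRC-32C data
 * digest (DDGST) after a data PDU's payload; the receiver recomputes the
 * CRC and the command is failed with a transient transport error on mismatch.
 */
#include <assert.h>
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Bitwise CRC-32C (Castagnoli): reflected polynomial 0x82F63B78,
 * initial value 0xFFFFFFFF, final XOR 0xFFFFFFFF. */
static uint32_t crc32c(const void *data, size_t len)
{
    const uint8_t *p = data;
    uint32_t crc = 0xFFFFFFFFu;

    while (len--) {
        crc ^= *p++;
        for (int bit = 0; bit < 8; bit++) {
            crc = (crc & 1u) ? (crc >> 1) ^ 0x82F63B78u : crc >> 1;
        }
    }
    return ~crc;
}

int main(void)
{
    /* Published CRC-32C check value for "123456789". */
    assert(crc32c("123456789", 9) == 0xE3069283u);

    /* Stand-in for one 32-block WRITE payload (512-byte blocks assumed). */
    static uint8_t payload[32 * 512];
    memset(payload, 0xA5, sizeof(payload));

    uint32_t computed_ddgst = crc32c(payload, sizeof(payload));
    uint32_t received_ddgst = computed_ddgst ^ 0x1u; /* simulate corruption on the wire */

    if (received_ddgst != computed_ddgst) {
        /* Analogous outcome to the log above: the digest mismatch is reported and
         * the WRITE completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22). */
        printf("Data digest error: received 0x%08" PRIx32 ", computed 0x%08" PRIx32 "\n",
               received_ddgst, computed_ddgst);
    }
    return 0;
}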
00:19:04.629 [2024-07-15 18:38:27.045780] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 
00:19:04.629 [2024-07-15 18:38:27.045971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:04.630 [2024-07-15 18:38:27.046115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
[... the same three-line sequence (tcp.c:2081:data_crc32_calc_done data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90, the nvme_qpair.c:243 WRITE command print with sqid:1 cid:15 nsid:1 len:32 and a varying lba, and the nvme_qpair.c:474 completion print with COMMAND TRANSIENT TRANSPORT ERROR (00/22) and sqhd cycling 0001/0021/0041/0061) repeats continuously from 18:38:27.045780 through 18:38:27.486851, console timestamps 00:19:04.629 through 00:19:04.892 ...] 
00:19:04.892 [2024-07-15 18:38:27.486851] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on
tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.892 [2024-07-15 18:38:27.486909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.892 [2024-07-15 18:38:27.486927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:04.892 [2024-07-15 18:38:27.489962] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.892 [2024-07-15 18:38:27.490022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.892 [2024-07-15 18:38:27.490040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:04.892 [2024-07-15 18:38:27.493141] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.892 [2024-07-15 18:38:27.493204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.892 [2024-07-15 18:38:27.493223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.892 [2024-07-15 18:38:27.496352] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.892 [2024-07-15 18:38:27.496435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.892 [2024-07-15 18:38:27.496454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:04.892 [2024-07-15 18:38:27.499550] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:04.892 [2024-07-15 18:38:27.499643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.892 [2024-07-15 18:38:27.499661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:05.151 [2024-07-15 18:38:27.502705] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.151 [2024-07-15 18:38:27.502772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.151 [2024-07-15 18:38:27.502790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:05.151 [2024-07-15 18:38:27.505877] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.151 [2024-07-15 18:38:27.505969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.151 [2024-07-15 18:38:27.505988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:05.151 [2024-07-15 18:38:27.509052] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.151 [2024-07-15 18:38:27.509131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.151 [2024-07-15 18:38:27.509150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:05.151 [2024-07-15 18:38:27.512204] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.151 [2024-07-15 18:38:27.512274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.151 [2024-07-15 18:38:27.512293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:05.151 [2024-07-15 18:38:27.515387] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.151 [2024-07-15 18:38:27.515452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.151 [2024-07-15 18:38:27.515470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:05.151 [2024-07-15 18:38:27.518534] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.151 [2024-07-15 18:38:27.518601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.151 [2024-07-15 18:38:27.518621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:05.151 [2024-07-15 18:38:27.521694] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.151 [2024-07-15 18:38:27.521760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.151 [2024-07-15 18:38:27.521778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:05.151 [2024-07-15 18:38:27.524896] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.151 [2024-07-15 18:38:27.524954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.151 [2024-07-15 18:38:27.524973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:05.151 [2024-07-15 18:38:27.528032] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.151 [2024-07-15 18:38:27.528117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.151 [2024-07-15 18:38:27.528136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:19:05.151 [2024-07-15 18:38:27.531200] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.151 [2024-07-15 18:38:27.531299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.151 [2024-07-15 18:38:27.531317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:05.151 [2024-07-15 18:38:27.534384] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.151 [2024-07-15 18:38:27.534442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.151 [2024-07-15 18:38:27.534460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:05.151 [2024-07-15 18:38:27.537602] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.151 [2024-07-15 18:38:27.537661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.151 [2024-07-15 18:38:27.537680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:05.151 [2024-07-15 18:38:27.540730] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.151 [2024-07-15 18:38:27.540815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.151 [2024-07-15 18:38:27.540833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:05.151 [2024-07-15 18:38:27.543855] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.151 [2024-07-15 18:38:27.543940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.151 [2024-07-15 18:38:27.543958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:05.151 [2024-07-15 18:38:27.547030] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.151 [2024-07-15 18:38:27.547111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.151 [2024-07-15 18:38:27.547131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:05.151 [2024-07-15 18:38:27.550228] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.151 [2024-07-15 18:38:27.550285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.151 [2024-07-15 18:38:27.550304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:05.151 [2024-07-15 18:38:27.553332] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.151 [2024-07-15 18:38:27.553438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.151 [2024-07-15 18:38:27.553456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:05.151 [2024-07-15 18:38:27.556703] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.151 [2024-07-15 18:38:27.556886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.151 [2024-07-15 18:38:27.556909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:05.151 [2024-07-15 18:38:27.559983] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.151 [2024-07-15 18:38:27.560068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.151 [2024-07-15 18:38:27.560087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:05.151 [2024-07-15 18:38:27.563108] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.151 [2024-07-15 18:38:27.563173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.151 [2024-07-15 18:38:27.563192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:05.151 [2024-07-15 18:38:27.566371] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.151 [2024-07-15 18:38:27.566553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.151 [2024-07-15 18:38:27.566705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:05.151 [2024-07-15 18:38:27.569679] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.151 [2024-07-15 18:38:27.569743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.151 [2024-07-15 18:38:27.569763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:05.151 [2024-07-15 18:38:27.572837] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.151 [2024-07-15 18:38:27.572902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.152 [2024-07-15 18:38:27.572921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:05.152 [2024-07-15 18:38:27.575963] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.152 [2024-07-15 18:38:27.576049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.152 [2024-07-15 18:38:27.576068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:05.152 [2024-07-15 18:38:27.579144] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.152 [2024-07-15 18:38:27.579253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.152 [2024-07-15 18:38:27.579272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:05.152 [2024-07-15 18:38:27.582487] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.152 [2024-07-15 18:38:27.582719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.152 [2024-07-15 18:38:27.582906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:05.152 [2024-07-15 18:38:27.585793] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.152 [2024-07-15 18:38:27.585995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.152 [2024-07-15 18:38:27.586302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:05.152 [2024-07-15 18:38:27.589212] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.152 [2024-07-15 18:38:27.589493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.152 [2024-07-15 18:38:27.589641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:05.152 [2024-07-15 18:38:27.592470] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.152 [2024-07-15 18:38:27.592745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.152 [2024-07-15 18:38:27.592875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:05.152 [2024-07-15 18:38:27.595779] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.152 [2024-07-15 18:38:27.595984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.152 [2024-07-15 18:38:27.596135] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:05.152 [2024-07-15 18:38:27.599056] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.152 [2024-07-15 18:38:27.599273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.152 [2024-07-15 18:38:27.599394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:05.152 [2024-07-15 18:38:27.602407] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.152 [2024-07-15 18:38:27.602600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.152 [2024-07-15 18:38:27.602620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:05.152 [2024-07-15 18:38:27.605659] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.152 [2024-07-15 18:38:27.605742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.152 [2024-07-15 18:38:27.605761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:05.152 [2024-07-15 18:38:27.608865] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.152 [2024-07-15 18:38:27.608926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.152 [2024-07-15 18:38:27.608945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:05.152 [2024-07-15 18:38:27.612056] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.152 [2024-07-15 18:38:27.612108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.152 [2024-07-15 18:38:27.612127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:05.152 [2024-07-15 18:38:27.615194] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.152 [2024-07-15 18:38:27.615266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.152 [2024-07-15 18:38:27.615286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:05.152 [2024-07-15 18:38:27.618338] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.152 [2024-07-15 18:38:27.618467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.152 
[2024-07-15 18:38:27.618486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:05.152 [2024-07-15 18:38:27.621494] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.152 [2024-07-15 18:38:27.621629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.152 [2024-07-15 18:38:27.621649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:05.152 [2024-07-15 18:38:27.624669] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.152 [2024-07-15 18:38:27.624730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.152 [2024-07-15 18:38:27.624748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:05.152 [2024-07-15 18:38:27.627833] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.152 [2024-07-15 18:38:27.627916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.152 [2024-07-15 18:38:27.627934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:05.152 [2024-07-15 18:38:27.630938] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.152 [2024-07-15 18:38:27.631016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.152 [2024-07-15 18:38:27.631035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:05.152 [2024-07-15 18:38:27.634087] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.152 [2024-07-15 18:38:27.634209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.152 [2024-07-15 18:38:27.634227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:05.152 [2024-07-15 18:38:27.637291] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.152 [2024-07-15 18:38:27.637372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.152 [2024-07-15 18:38:27.637392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:05.152 [2024-07-15 18:38:27.640461] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.152 [2024-07-15 18:38:27.640540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:19:05.152 [2024-07-15 18:38:27.640559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:05.152 [2024-07-15 18:38:27.643542] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.152 [2024-07-15 18:38:27.643627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.152 [2024-07-15 18:38:27.643646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:05.152 [2024-07-15 18:38:27.646683] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.152 [2024-07-15 18:38:27.646787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.152 [2024-07-15 18:38:27.646806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:05.152 [2024-07-15 18:38:27.649832] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.152 [2024-07-15 18:38:27.650033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.152 [2024-07-15 18:38:27.650051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:05.152 [2024-07-15 18:38:27.652964] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.152 [2024-07-15 18:38:27.653127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.152 [2024-07-15 18:38:27.653145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:05.152 [2024-07-15 18:38:27.656099] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.152 [2024-07-15 18:38:27.656215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.152 [2024-07-15 18:38:27.656234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:05.152 [2024-07-15 18:38:27.659281] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.152 [2024-07-15 18:38:27.659381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.152 [2024-07-15 18:38:27.659399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:05.152 [2024-07-15 18:38:27.662500] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.153 [2024-07-15 18:38:27.662639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.153 [2024-07-15 18:38:27.662657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:05.153 [2024-07-15 18:38:27.665690] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.153 [2024-07-15 18:38:27.665823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.153 [2024-07-15 18:38:27.665841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:05.153 [2024-07-15 18:38:27.668853] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.153 [2024-07-15 18:38:27.668936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.153 [2024-07-15 18:38:27.668955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:05.153 [2024-07-15 18:38:27.671993] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.153 [2024-07-15 18:38:27.672082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.153 [2024-07-15 18:38:27.672101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:05.153 [2024-07-15 18:38:27.675172] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.153 [2024-07-15 18:38:27.675292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.153 [2024-07-15 18:38:27.675311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:05.153 [2024-07-15 18:38:27.678307] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.153 [2024-07-15 18:38:27.678438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.153 [2024-07-15 18:38:27.678457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:05.153 [2024-07-15 18:38:27.681501] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.153 [2024-07-15 18:38:27.681648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.153 [2024-07-15 18:38:27.681668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:05.153 [2024-07-15 18:38:27.684681] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.153 [2024-07-15 18:38:27.684794] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.153 [2024-07-15 18:38:27.684813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:05.153 [2024-07-15 18:38:27.687844] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.153 [2024-07-15 18:38:27.687985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.153 [2024-07-15 18:38:27.688004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:05.153 [2024-07-15 18:38:27.690984] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.153 [2024-07-15 18:38:27.691070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.153 [2024-07-15 18:38:27.691089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:05.153 [2024-07-15 18:38:27.694177] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.153 [2024-07-15 18:38:27.694281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.153 [2024-07-15 18:38:27.694300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:05.153 [2024-07-15 18:38:27.697368] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.153 [2024-07-15 18:38:27.697486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.153 [2024-07-15 18:38:27.697504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:05.153 [2024-07-15 18:38:27.700474] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.153 [2024-07-15 18:38:27.700598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.153 [2024-07-15 18:38:27.700617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:05.153 [2024-07-15 18:38:27.703646] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.153 [2024-07-15 18:38:27.703800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.153 [2024-07-15 18:38:27.703819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:05.153 [2024-07-15 18:38:27.706770] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.153 
[2024-07-15 18:38:27.706926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.153 [2024-07-15 18:38:27.706945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:05.153 [2024-07-15 18:38:27.710001] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.153 [2024-07-15 18:38:27.710125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.153 [2024-07-15 18:38:27.710143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:05.153 [2024-07-15 18:38:27.713302] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.153 [2024-07-15 18:38:27.713420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.153 [2024-07-15 18:38:27.713439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:05.153 [2024-07-15 18:38:27.716479] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.153 [2024-07-15 18:38:27.716621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.153 [2024-07-15 18:38:27.716640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:05.153 [2024-07-15 18:38:27.719624] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.153 [2024-07-15 18:38:27.719758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.153 [2024-07-15 18:38:27.719776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:05.153 [2024-07-15 18:38:27.722751] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.153 [2024-07-15 18:38:27.722904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.153 [2024-07-15 18:38:27.722922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:05.153 [2024-07-15 18:38:27.725924] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.153 [2024-07-15 18:38:27.726012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.153 [2024-07-15 18:38:27.726031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:05.153 [2024-07-15 18:38:27.729090] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.153 [2024-07-15 18:38:27.729215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.153 [2024-07-15 18:38:27.729234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:05.153 [2024-07-15 18:38:27.732272] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.153 [2024-07-15 18:38:27.732358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.153 [2024-07-15 18:38:27.732377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:05.153 [2024-07-15 18:38:27.735417] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.153 [2024-07-15 18:38:27.735551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.153 [2024-07-15 18:38:27.735582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:05.153 [2024-07-15 18:38:27.738588] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.153 [2024-07-15 18:38:27.738735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.153 [2024-07-15 18:38:27.738753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:05.153 [2024-07-15 18:38:27.741756] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.153 [2024-07-15 18:38:27.741891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.153 [2024-07-15 18:38:27.741909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:05.153 [2024-07-15 18:38:27.744934] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.153 [2024-07-15 18:38:27.745059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.153 [2024-07-15 18:38:27.745078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:05.153 [2024-07-15 18:38:27.748060] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.153 [2024-07-15 18:38:27.748191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.153 [2024-07-15 18:38:27.748209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:05.153 [2024-07-15 18:38:27.751251] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.153 [2024-07-15 18:38:27.751342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.153 [2024-07-15 18:38:27.751361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:05.153 [2024-07-15 18:38:27.754457] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.154 [2024-07-15 18:38:27.754562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.154 [2024-07-15 18:38:27.754594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:05.154 [2024-07-15 18:38:27.757638] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.154 [2024-07-15 18:38:27.757785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.154 [2024-07-15 18:38:27.757804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:05.154 [2024-07-15 18:38:27.760743] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.154 [2024-07-15 18:38:27.760904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.154 [2024-07-15 18:38:27.760923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:05.412 [2024-07-15 18:38:27.763846] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.412 [2024-07-15 18:38:27.764001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.412 [2024-07-15 18:38:27.764020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:05.412 [2024-07-15 18:38:27.767000] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.412 [2024-07-15 18:38:27.767155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.412 [2024-07-15 18:38:27.767174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:05.412 [2024-07-15 18:38:27.770130] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.412 [2024-07-15 18:38:27.770301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.412 [2024-07-15 18:38:27.770319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:19:05.412 [2024-07-15 18:38:27.773299] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.412 [2024-07-15 18:38:27.773453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.412 [2024-07-15 18:38:27.773472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:05.412 [2024-07-15 18:38:27.776420] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.412 [2024-07-15 18:38:27.776580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.412 [2024-07-15 18:38:27.776599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:05.412 [2024-07-15 18:38:27.779531] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.412 [2024-07-15 18:38:27.779712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.412 [2024-07-15 18:38:27.779731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:05.412 [2024-07-15 18:38:27.782698] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.412 [2024-07-15 18:38:27.782859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.412 [2024-07-15 18:38:27.782877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:05.412 [2024-07-15 18:38:27.785838] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.412 [2024-07-15 18:38:27.786017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.412 [2024-07-15 18:38:27.786037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:05.412 [2024-07-15 18:38:27.788962] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.412 [2024-07-15 18:38:27.789138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.412 [2024-07-15 18:38:27.789157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:05.412 [2024-07-15 18:38:27.792161] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.413 [2024-07-15 18:38:27.792303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.413 [2024-07-15 18:38:27.792322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:05.413 [2024-07-15 18:38:27.795345] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.413 [2024-07-15 18:38:27.795493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.413 [2024-07-15 18:38:27.795511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:05.413 [2024-07-15 18:38:27.798518] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.413 [2024-07-15 18:38:27.798668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.413 [2024-07-15 18:38:27.798687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:05.413 [2024-07-15 18:38:27.801715] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.413 [2024-07-15 18:38:27.801882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.413 [2024-07-15 18:38:27.801901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:05.413 [2024-07-15 18:38:27.804902] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.413 [2024-07-15 18:38:27.805045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.413 [2024-07-15 18:38:27.805064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:05.413 [2024-07-15 18:38:27.808076] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.413 [2024-07-15 18:38:27.808222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.413 [2024-07-15 18:38:27.808241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:05.413 [2024-07-15 18:38:27.811243] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.413 [2024-07-15 18:38:27.811393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.413 [2024-07-15 18:38:27.811411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:05.413 [2024-07-15 18:38:27.814372] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.413 [2024-07-15 18:38:27.814540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.413 [2024-07-15 18:38:27.814559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:05.413 [2024-07-15 18:38:27.817533] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.413 [2024-07-15 18:38:27.817691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.413 [2024-07-15 18:38:27.817709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:05.413 [2024-07-15 18:38:27.820731] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.413 [2024-07-15 18:38:27.820876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.413 [2024-07-15 18:38:27.820894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:05.413 [2024-07-15 18:38:27.823940] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.413 [2024-07-15 18:38:27.824078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.413 [2024-07-15 18:38:27.824097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:05.413 [2024-07-15 18:38:27.827023] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.413 [2024-07-15 18:38:27.827191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.413 [2024-07-15 18:38:27.827220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:05.413 [2024-07-15 18:38:27.830172] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.413 [2024-07-15 18:38:27.830336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.413 [2024-07-15 18:38:27.830355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:05.413 [2024-07-15 18:38:27.833329] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.413 [2024-07-15 18:38:27.833472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.413 [2024-07-15 18:38:27.833491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:05.413 [2024-07-15 18:38:27.836539] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.413 [2024-07-15 18:38:27.836710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.413 [2024-07-15 18:38:27.836729] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:05.413 [2024-07-15 18:38:27.839656] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.413 [2024-07-15 18:38:27.839821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.413 [2024-07-15 18:38:27.839840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:05.413 [2024-07-15 18:38:27.842838] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.413 [2024-07-15 18:38:27.842984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.413 [2024-07-15 18:38:27.843003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:05.413 [2024-07-15 18:38:27.845957] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.413 [2024-07-15 18:38:27.846123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.413 [2024-07-15 18:38:27.846142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:05.413 [2024-07-15 18:38:27.849070] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.413 [2024-07-15 18:38:27.849248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.413 [2024-07-15 18:38:27.849266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:05.413 [2024-07-15 18:38:27.852275] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.413 [2024-07-15 18:38:27.852432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.413 [2024-07-15 18:38:27.852451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:05.413 [2024-07-15 18:38:27.855432] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.413 [2024-07-15 18:38:27.855596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.413 [2024-07-15 18:38:27.855614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:05.413 [2024-07-15 18:38:27.858542] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.413 [2024-07-15 18:38:27.858727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.413 
[2024-07-15 18:38:27.858745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:05.413 [2024-07-15 18:38:27.861733] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.413 [2024-07-15 18:38:27.861901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.413 [2024-07-15 18:38:27.861920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:05.413 [2024-07-15 18:38:27.864935] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.413 [2024-07-15 18:38:27.865099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.413 [2024-07-15 18:38:27.865118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:05.413 [2024-07-15 18:38:27.868130] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.413 [2024-07-15 18:38:27.868282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.413 [2024-07-15 18:38:27.868300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:05.413 [2024-07-15 18:38:27.871429] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.413 [2024-07-15 18:38:27.871717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.413 [2024-07-15 18:38:27.871928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:05.413 [2024-07-15 18:38:27.874721] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.413 [2024-07-15 18:38:27.874873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.413 [2024-07-15 18:38:27.874892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:05.413 [2024-07-15 18:38:27.877858] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.413 [2024-07-15 18:38:27.878007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.413 [2024-07-15 18:38:27.878025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:05.413 [2024-07-15 18:38:27.880918] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.413 [2024-07-15 18:38:27.881072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.413 [2024-07-15 18:38:27.881091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:05.413 [2024-07-15 18:38:27.884062] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.413 [2024-07-15 18:38:27.884208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.413 [2024-07-15 18:38:27.884226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:05.413 [2024-07-15 18:38:27.887239] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.413 [2024-07-15 18:38:27.887401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.413 [2024-07-15 18:38:27.887419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:05.413 [2024-07-15 18:38:27.890401] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.413 [2024-07-15 18:38:27.890541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.413 [2024-07-15 18:38:27.890560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:05.413 [2024-07-15 18:38:27.893500] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.413 [2024-07-15 18:38:27.893664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.413 [2024-07-15 18:38:27.893682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:05.413 [2024-07-15 18:38:27.896653] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.413 [2024-07-15 18:38:27.896800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.413 [2024-07-15 18:38:27.896819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:05.413 [2024-07-15 18:38:27.899806] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.413 [2024-07-15 18:38:27.899959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.413 [2024-07-15 18:38:27.899977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:05.413 [2024-07-15 18:38:27.903007] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.413 [2024-07-15 18:38:27.903172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.413 [2024-07-15 18:38:27.903190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:05.413 [2024-07-15 18:38:27.906139] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.413 [2024-07-15 18:38:27.906307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.413 [2024-07-15 18:38:27.906325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:05.413 [2024-07-15 18:38:27.909269] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.413 [2024-07-15 18:38:27.909447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.413 [2024-07-15 18:38:27.909466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:05.413 [2024-07-15 18:38:27.912422] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.413 [2024-07-15 18:38:27.912607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.413 [2024-07-15 18:38:27.912626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:05.413 [2024-07-15 18:38:27.915610] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.413 [2024-07-15 18:38:27.915763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.413 [2024-07-15 18:38:27.915787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:05.413 [2024-07-15 18:38:27.918718] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.413 [2024-07-15 18:38:27.918863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.413 [2024-07-15 18:38:27.918887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:05.413 [2024-07-15 18:38:27.921952] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.413 [2024-07-15 18:38:27.922093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.413 [2024-07-15 18:38:27.922116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:05.413 [2024-07-15 18:38:27.925080] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.413 [2024-07-15 18:38:27.925218] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.413 [2024-07-15 18:38:27.925243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:05.413 [2024-07-15 18:38:27.928277] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.413 [2024-07-15 18:38:27.928422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.413 [2024-07-15 18:38:27.928440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:05.413 [2024-07-15 18:38:27.931469] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.413 [2024-07-15 18:38:27.931636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.413 [2024-07-15 18:38:27.931654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:05.413 [2024-07-15 18:38:27.934663] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.413 [2024-07-15 18:38:27.934806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.413 [2024-07-15 18:38:27.934824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:05.413 [2024-07-15 18:38:27.937784] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.413 [2024-07-15 18:38:27.937937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.413 [2024-07-15 18:38:27.937956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:05.413 [2024-07-15 18:38:27.940913] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.413 [2024-07-15 18:38:27.941070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.413 [2024-07-15 18:38:27.941089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:05.413 [2024-07-15 18:38:27.944132] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.414 [2024-07-15 18:38:27.944286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.414 [2024-07-15 18:38:27.944304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:05.414 [2024-07-15 18:38:27.947297] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.414 
[2024-07-15 18:38:27.947441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.414 [2024-07-15 18:38:27.947460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:05.414 [2024-07-15 18:38:27.950453] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.414 [2024-07-15 18:38:27.950615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.414 [2024-07-15 18:38:27.950634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:05.414 [2024-07-15 18:38:27.953626] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.414 [2024-07-15 18:38:27.953762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.414 [2024-07-15 18:38:27.953780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:05.414 [2024-07-15 18:38:27.956824] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.414 [2024-07-15 18:38:27.956948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.414 [2024-07-15 18:38:27.956967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:05.414 [2024-07-15 18:38:27.959948] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.414 [2024-07-15 18:38:27.960077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.414 [2024-07-15 18:38:27.960095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:05.414 [2024-07-15 18:38:27.963196] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.414 [2024-07-15 18:38:27.963336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.414 [2024-07-15 18:38:27.963355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:05.414 [2024-07-15 18:38:27.966375] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.414 [2024-07-15 18:38:27.966499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.414 [2024-07-15 18:38:27.966518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:05.414 [2024-07-15 18:38:27.969589] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.414 [2024-07-15 18:38:27.969740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.414 [2024-07-15 18:38:27.969759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:05.414 [2024-07-15 18:38:27.972791] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.414 [2024-07-15 18:38:27.972910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.414 [2024-07-15 18:38:27.972928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:05.414 [2024-07-15 18:38:27.975961] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.414 [2024-07-15 18:38:27.976084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.414 [2024-07-15 18:38:27.976102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:05.414 [2024-07-15 18:38:27.979003] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.414 [2024-07-15 18:38:27.979113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.414 [2024-07-15 18:38:27.979132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:05.414 [2024-07-15 18:38:27.982213] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.414 [2024-07-15 18:38:27.982324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.414 [2024-07-15 18:38:27.982342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:05.414 [2024-07-15 18:38:27.985386] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.414 [2024-07-15 18:38:27.985463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.414 [2024-07-15 18:38:27.985482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:05.414 [2024-07-15 18:38:27.988395] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112cbc0) with pdu=0x2000190fef90 00:19:05.414 [2024-07-15 18:38:27.988455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.414 [2024-07-15 18:38:27.988474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:05.414 00:19:05.414 Latency(us) 00:19:05.414 Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:05.414 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:19:05.414 nvme0n1 : 2.00 9416.95 1177.12 0.00 0.00 1695.65 1138.33 4105.87 00:19:05.414 =================================================================================================================== 00:19:05.414 Total : 9416.95 1177.12 0.00 0.00 1695.65 1138.33 4105.87 00:19:05.414 0 00:19:05.414 18:38:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:19:05.414 18:38:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:19:05.414 | .driver_specific 00:19:05.414 | .nvme_error 00:19:05.414 | .status_code 00:19:05.414 | .command_transient_transport_error' 00:19:05.414 18:38:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:19:05.414 18:38:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:19:05.671 18:38:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 608 > 0 )) 00:19:05.671 18:38:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 93321 00:19:05.671 18:38:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 93321 ']' 00:19:05.671 18:38:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 93321 00:19:05.671 18:38:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:19:05.671 18:38:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:05.671 18:38:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93321 00:19:05.671 killing process with pid 93321 00:19:05.671 Received shutdown signal, test time was about 2.000000 seconds 00:19:05.671 00:19:05.671 Latency(us) 00:19:05.671 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:05.671 =================================================================================================================== 00:19:05.671 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:05.671 18:38:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:05.671 18:38:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:05.671 18:38:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93321' 00:19:05.671 18:38:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 93321 00:19:05.671 18:38:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 93321 00:19:05.928 18:38:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 93015 00:19:05.928 18:38:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 93015 ']' 00:19:05.928 18:38:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 93015 00:19:05.928 18:38:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:19:05.928 18:38:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:05.928 18:38:28 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93015 00:19:05.928 killing process with pid 93015 00:19:05.928 18:38:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:05.928 18:38:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:05.928 18:38:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93015' 00:19:05.928 18:38:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 93015 00:19:05.928 18:38:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 93015 00:19:06.184 ************************************ 00:19:06.184 END TEST nvmf_digest_error 00:19:06.184 ************************************ 00:19:06.184 00:19:06.184 real 0m17.192s 00:19:06.184 user 0m31.388s 00:19:06.184 sys 0m4.995s 00:19:06.184 18:38:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:06.184 18:38:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:06.184 18:38:28 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:19:06.184 18:38:28 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:19:06.184 18:38:28 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:19:06.184 18:38:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:06.184 18:38:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:19:06.441 18:38:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:06.441 18:38:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:19:06.441 18:38:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:06.441 18:38:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:06.441 rmmod nvme_tcp 00:19:06.441 rmmod nvme_fabrics 00:19:06.441 rmmod nvme_keyring 00:19:06.441 18:38:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:06.441 18:38:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:19:06.441 18:38:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:19:06.441 18:38:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 93015 ']' 00:19:06.441 18:38:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 93015 00:19:06.441 18:38:28 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 93015 ']' 00:19:06.441 18:38:28 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 93015 00:19:06.441 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (93015) - No such process 00:19:06.441 Process with pid 93015 is not found 00:19:06.441 18:38:28 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 93015 is not found' 00:19:06.441 18:38:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:06.441 18:38:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:06.441 18:38:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:06.441 18:38:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:06.441 18:38:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:06.441 18:38:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:19:06.441 18:38:28 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:06.441 18:38:28 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:06.441 18:38:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:06.441 00:19:06.441 real 0m35.417s 00:19:06.441 user 1m3.379s 00:19:06.441 sys 0m10.318s 00:19:06.441 18:38:28 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:06.441 18:38:28 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:19:06.441 ************************************ 00:19:06.441 END TEST nvmf_digest 00:19:06.441 ************************************ 00:19:06.441 18:38:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:06.441 18:38:28 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 1 -eq 1 ]] 00:19:06.441 18:38:28 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ tcp == \t\c\p ]] 00:19:06.441 18:38:28 nvmf_tcp -- nvmf/nvmf.sh@113 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:19:06.441 18:38:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:06.441 18:38:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:06.441 18:38:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:06.441 ************************************ 00:19:06.441 START TEST nvmf_mdns_discovery 00:19:06.441 ************************************ 00:19:06.441 18:38:28 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:19:06.699 * Looking for test storage... 00:19:06.699 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:06.699 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:06.699 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # uname -s 00:19:06.699 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:06.699 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:06.699 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:06.699 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:06.699 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:06.699 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:06.699 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:06.699 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:06.699 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:06.699 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:06.699 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:19:06.699 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=ee8aff67-4252-4979-91cf-1a72f40d57b6 00:19:06.699 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:06.699 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:19:06.699 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:06.699 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:06.699 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:06.699 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:06.699 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:06.699 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:06.699 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.699 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.699 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.699 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@5 -- # export PATH 00:19:06.699 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.699 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@47 -- # : 0 00:19:06.699 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:06.699 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:06.699 18:38:29 
nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:06.699 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:06.699 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:06.699 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:06.699 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:06.699 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:06.699 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@13 -- # DISCOVERY_FILTER=address 00:19:06.699 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@14 -- # DISCOVERY_PORT=8009 00:19:06.699 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:19:06.699 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@18 -- # NQN=nqn.2016-06.io.spdk:cnode 00:19:06.699 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@19 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:19:06.699 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@21 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:19:06.699 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@22 -- # HOST_SOCK=/tmp/host.sock 00:19:06.699 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@24 -- # nvmftestinit 00:19:06.699 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:06.699 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:06.699 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:06.699 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:06.699 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:06.700 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:06.700 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:06.700 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:06.700 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:06.700 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:06.700 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:06.700 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:06.700 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:06.700 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:06.700 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:06.700 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:06.700 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:06.700 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:06.700 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:06.700 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 
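For orientation, the values that host/mdns_discovery.sh and nvmf/common.sh assign in the trace above can be gathered into one bash snippet. This is a readability aid reconstructed from the xtrace output, not an excerpt of either script:

    # Discovery-test parameters set by host/mdns_discovery.sh (per the trace above)
    DISCOVERY_FILTER=address
    DISCOVERY_PORT=8009
    DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery
    NQN=nqn.2016-06.io.spdk:cnode
    NQN2=nqn.2016-06.io.spdk:cnode2
    HOST_NQN=nqn.2021-12.io.spdk:test
    HOST_SOCK=/tmp/host.sock

    # Addressing used by nvmftestinit / nvmf_veth_init (nvmf/common.sh)
    NVMF_BRIDGE=nvmf_br
    NVMF_INITIATOR_IP=10.0.0.1      # initiator side, stays in the default network namespace
    NVMF_FIRST_TARGET_IP=10.0.0.2   # first target interface, moved into a namespace below
    NVMF_SECOND_TARGET_IP=10.0.0.3  # second target interface, moved into a namespace below

The veth topology that uses these addresses is built in the trace that follows.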
00:19:06.700 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:06.700 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:06.700 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:06.700 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:06.700 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:06.700 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:06.700 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:06.700 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:06.700 Cannot find device "nvmf_tgt_br" 00:19:06.700 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # true 00:19:06.700 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:06.700 Cannot find device "nvmf_tgt_br2" 00:19:06.700 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # true 00:19:06.700 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:06.700 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:06.700 Cannot find device "nvmf_tgt_br" 00:19:06.700 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # true 00:19:06.700 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:06.700 Cannot find device "nvmf_tgt_br2" 00:19:06.700 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # true 00:19:06.700 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:06.959 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:06.959 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:06.959 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:06.959 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # true 00:19:06.959 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:06.959 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:06.959 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # true 00:19:06.959 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:06.959 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:06.959 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:06.959 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:06.959 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:06.959 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:06.959 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@178 -- 
# ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:06.959 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:06.959 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:06.959 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:06.959 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:06.959 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:06.959 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:06.959 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:06.959 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:06.959 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:06.959 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:06.959 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:06.959 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:07.240 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:07.241 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:07.241 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:07.241 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:07.241 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:07.241 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:07.241 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.125 ms 00:19:07.241 00:19:07.241 --- 10.0.0.2 ping statistics --- 00:19:07.241 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:07.241 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:19:07.241 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:07.241 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:07.241 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.088 ms 00:19:07.241 00:19:07.241 --- 10.0.0.3 ping statistics --- 00:19:07.241 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:07.241 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:19:07.241 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:07.241 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:07.241 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:19:07.241 00:19:07.241 --- 10.0.0.1 ping statistics --- 00:19:07.241 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:07.241 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:19:07.241 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:07.241 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@433 -- # return 0 00:19:07.241 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:07.241 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:07.241 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:07.241 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:07.241 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:07.241 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:07.241 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:07.241 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@29 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:07.241 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:07.241 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:07.241 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:07.241 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@481 -- # nvmfpid=93612 00:19:07.241 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@482 -- # waitforlisten 93612 00:19:07.241 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@829 -- # '[' -z 93612 ']' 00:19:07.241 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:07.241 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:07.241 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:07.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:07.241 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:19:07.241 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:07.241 18:38:29 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:07.241 [2024-07-15 18:38:29.740188] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:19:07.241 [2024-07-15 18:38:29.740403] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:07.499 [2024-07-15 18:38:29.883754] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:07.499 [2024-07-15 18:38:29.974759] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
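The nvmf_veth_init sequence traced above gives the target its own network namespace and bridged veth links, so NVMe/TCP traffic between 10.0.0.1 (initiator) and 10.0.0.2/10.0.0.3 (target) stays on virtual interfaces. Condensed into a standalone sketch with the same interface names and addresses as the log (assumes root plus iproute2 and iptables; not a verbatim excerpt of nvmf/common.sh):

    #!/usr/bin/env bash
    set -e

    ip netns add nvmf_tgt_ns_spdk

    # One veth pair for the initiator side, two for the target side.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

    # Move the target ends into the namespace and assign addresses.
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # Bring everything up and bridge the host-side peers together.
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # Allow NVMe/TCP traffic on the default port and sanity-check reachability.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2
    ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

With nvme-tcp loaded via modprobe, the target application is then started inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc), so its listeners bind to 10.0.0.2/10.0.0.3 rather than to the machine's real interfaces.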
00:19:07.499 [2024-07-15 18:38:29.974946] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:07.499 [2024-07-15 18:38:29.975040] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:07.499 [2024-07-15 18:38:29.975084] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:07.499 [2024-07-15 18:38:29.975109] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:07.499 [2024-07-15 18:38:29.975154] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:08.064 18:38:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:08.064 18:38:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@862 -- # return 0 00:19:08.064 18:38:30 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:08.064 18:38:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:08.064 18:38:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:08.323 18:38:30 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:08.323 18:38:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@31 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:19:08.323 18:38:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.323 18:38:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:08.323 18:38:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.323 18:38:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@32 -- # rpc_cmd framework_start_init 00:19:08.323 18:38:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.323 18:38:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:08.323 18:38:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.323 18:38:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:08.323 18:38:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.323 18:38:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:08.323 [2024-07-15 18:38:30.781735] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:08.323 18:38:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.323 18:38:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:19:08.323 18:38:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.323 18:38:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:08.323 [2024-07-15 18:38:30.793806] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:19:08.323 18:38:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.323 18:38:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null0 1000 512 00:19:08.323 18:38:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:19:08.323 18:38:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:08.323 null0 00:19:08.323 18:38:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.323 18:38:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null1 1000 512 00:19:08.323 18:38:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.323 18:38:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:08.323 null1 00:19:08.323 18:38:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.323 18:38:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null2 1000 512 00:19:08.323 18:38:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.323 18:38:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:08.323 null2 00:19:08.323 18:38:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.323 18:38:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_null_create null3 1000 512 00:19:08.323 18:38:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.323 18:38:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:08.323 null3 00:19:08.323 18:38:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.323 18:38:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@40 -- # rpc_cmd bdev_wait_for_examine 00:19:08.323 18:38:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.323 18:38:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:08.323 18:38:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.323 18:38:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@48 -- # hostpid=93662 00:19:08.323 18:38:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:19:08.323 18:38:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@49 -- # waitforlisten 93662 /tmp/host.sock 00:19:08.323 18:38:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@829 -- # '[' -z 93662 ']' 00:19:08.323 18:38:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:19:08.323 18:38:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:08.323 18:38:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:19:08.323 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:19:08.323 18:38:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:08.323 18:38:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:08.323 [2024-07-15 18:38:30.905688] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 
00:19:08.323 [2024-07-15 18:38:30.905884] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93662 ] 00:19:08.582 [2024-07-15 18:38:31.037126] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:08.582 [2024-07-15 18:38:31.125413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:09.519 18:38:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:09.519 18:38:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@862 -- # return 0 00:19:09.519 18:38:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:19:09.519 18:38:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@52 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahipid;' EXIT 00:19:09.519 18:38:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@56 -- # avahi-daemon --kill 00:19:09.519 18:38:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@58 -- # avahipid=93691 00:19:09.519 18:38:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@59 -- # sleep 1 00:19:09.519 18:38:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:19:09.519 18:38:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:19:09.519 Process 982 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:19:09.519 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:19:09.519 Successfully dropped root privileges. 00:19:09.519 avahi-daemon 0.8 starting up. 00:19:10.474 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:19:10.475 Successfully called chroot(). 00:19:10.475 Successfully dropped remaining capabilities. 00:19:10.475 No service file found in /etc/avahi/services. 00:19:10.475 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:19:10.475 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 00:19:10.475 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:19:10.475 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:19:10.475 Network interface enumeration completed. 00:19:10.475 Registering new address record for fe80::587a:63ff:fef9:f6a7 on nvmf_tgt_if2.*. 00:19:10.475 Registering new address record for 10.0.0.3 on nvmf_tgt_if2.IPv4. 00:19:10.475 Registering new address record for fe80::e073:5fff:fecc:6446 on nvmf_tgt_if.*. 00:19:10.475 Registering new address record for 10.0.0.2 on nvmf_tgt_if.IPv4. 00:19:10.475 Server startup complete. Host name is fedora38-cloud-1716830599-074-updated-1705279005.local. Local service cookie is 3783337768. 
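For readability, the configuration that the test pipes to avahi-daemon through /dev/fd/63 above (the echo -e argument at mdns_discovery.sh@57 in this trace) expands to the following; it restricts mDNS to the two target interfaces and to IPv4 only:

    [server]
    allow-interfaces=nvmf_tgt_if,nvmf_tgt_if2
    use-ipv4=yes
    use-ipv6=no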
00:19:10.475 18:38:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:19:10.475 18:38:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.475 18:38:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:10.475 18:38:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.475 18:38:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@62 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:19:10.475 18:38:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.475 18:38:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:10.475 18:38:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.475 18:38:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # notify_id=0 00:19:10.475 18:38:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # get_subsystem_names 00:19:10.475 18:38:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:10.475 18:38:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:19:10.475 18:38:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.475 18:38:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:10.475 18:38:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:19:10.475 18:38:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:19:10.475 18:38:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.475 18:38:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # [[ '' == '' ]] 00:19:10.475 18:38:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # get_bdev_list 00:19:10.475 18:38:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:10.475 18:38:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.475 18:38:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:10.475 18:38:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:19:10.475 18:38:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:19:10.475 18:38:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:19:10.475 18:38:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.475 18:38:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # [[ '' == '' ]] 00:19:10.475 18:38:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:19:10.475 18:38:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.475 18:38:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:10.475 18:38:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.475 18:38:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # get_subsystem_names 00:19:10.475 18:38:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:19:10.475 18:38:33 
nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:10.475 18:38:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:19:10.475 18:38:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.475 18:38:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:19:10.475 18:38:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:10.475 18:38:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.752 18:38:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ '' == '' ]] 00:19:10.752 18:38:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # get_bdev_list 00:19:10.752 18:38:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:10.752 18:38:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:19:10.752 18:38:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:19:10.752 18:38:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:19:10.752 18:38:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.752 18:38:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:10.752 18:38:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.752 18:38:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # [[ '' == '' ]] 00:19:10.752 18:38:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:19:10.752 18:38:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.752 18:38:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:10.752 18:38:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.752 18:38:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@100 -- # get_subsystem_names 00:19:10.752 18:38:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:10.752 18:38:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:19:10.752 18:38:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:19:10.752 18:38:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:19:10.752 18:38:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.752 18:38:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:10.752 18:38:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.752 18:38:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@100 -- # [[ '' == '' ]] 00:19:10.752 18:38:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@101 -- # get_bdev_list 00:19:10.752 [2024-07-15 18:38:33.218211] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:19:10.752 18:38:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:19:10.752 18:38:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:10.753 18:38:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:19:10.753 18:38:33 
nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:19:10.753 18:38:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.753 18:38:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:10.753 18:38:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.753 18:38:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@101 -- # [[ '' == '' ]] 00:19:10.753 18:38:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@105 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:10.753 18:38:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.753 18:38:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:10.753 [2024-07-15 18:38:33.255108] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:10.753 18:38:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.753 18:38:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@109 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:19:10.753 18:38:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.753 18:38:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:10.753 18:38:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.753 18:38:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@112 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:19:10.753 18:38:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.753 18:38:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:10.753 18:38:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.753 18:38:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@113 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:19:10.753 18:38:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.753 18:38:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:10.753 18:38:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.753 18:38:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:19:10.753 18:38:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.753 18:38:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:10.753 18:38:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.753 18:38:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@119 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:19:10.753 18:38:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.753 18:38:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:10.753 [2024-07-15 18:38:33.298996] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:19:10.753 18:38:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
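The target-side provisioning traced above runs through rpc_cmd against the default /var/tmp/spdk.sock instance. As a rough sketch only, assuming rpc_cmd forwards to SPDK's scripts/rpc.py (as the common autotest helpers do), the same setup could be issued by hand as below; every RPC name and argument is taken from the trace above, and the remaining 4420/4421 listeners plus the mDNS PRR publication follow just below in the log:

    # sketch, not part of the test run: target-side setup replayed via rpc.py
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
    rpc.py bdev_null_create null0 1000 512        # null1..null3 are created the same way
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test
    rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009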
00:19:10.753 18:38:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@121 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:19:10.753 18:38:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.753 18:38:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:10.753 [2024-07-15 18:38:33.306952] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:10.753 18:38:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.753 18:38:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@124 -- # rpc_cmd nvmf_publish_mdns_prr 00:19:10.753 18:38:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.753 18:38:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:10.753 18:38:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.753 18:38:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@125 -- # sleep 5 00:19:11.687 [2024-07-15 18:38:34.116754] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:19:12.253 [2024-07-15 18:38:34.715793] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:19:12.253 [2024-07-15 18:38:34.715828] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:19:12.253 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:19:12.253 cookie is 0 00:19:12.253 is_local: 1 00:19:12.253 our_own: 0 00:19:12.253 wide_area: 0 00:19:12.253 multicast: 1 00:19:12.253 cached: 1 00:19:12.253 [2024-07-15 18:38:34.815636] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:19:12.253 [2024-07-15 18:38:34.815669] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:19:12.253 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:19:12.253 cookie is 0 00:19:12.253 is_local: 1 00:19:12.253 our_own: 0 00:19:12.253 wide_area: 0 00:19:12.253 multicast: 1 00:19:12.253 cached: 1 00:19:12.253 [2024-07-15 18:38:34.815681] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:19:12.510 [2024-07-15 18:38:34.915471] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:19:12.510 [2024-07-15 18:38:34.915496] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:19:12.510 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:19:12.510 cookie is 0 00:19:12.510 is_local: 1 00:19:12.510 our_own: 0 00:19:12.510 wide_area: 0 00:19:12.510 multicast: 1 00:19:12.510 cached: 1 00:19:12.510 [2024-07-15 18:38:35.015319] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:19:12.510 [2024-07-15 18:38:35.015354] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:19:12.510 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:19:12.510 cookie is 0 00:19:12.510 is_local: 1 00:19:12.510 our_own: 0 00:19:12.510 wide_area: 0 00:19:12.510 multicast: 1 00:19:12.510 cached: 1 00:19:12.510 [2024-07-15 18:38:35.015367] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.2 trid->trsvcid: 8009 00:19:13.443 [2024-07-15 18:38:35.727396] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:19:13.443 [2024-07-15 18:38:35.727432] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:19:13.443 [2024-07-15 18:38:35.727447] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:13.443 [2024-07-15 18:38:35.813390] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 new subsystem mdns0_nvme0 00:19:13.443 [2024-07-15 18:38:35.870152] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:19:13.443 [2024-07-15 18:38:35.870180] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:19:13.443 [2024-07-15 18:38:35.916982] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:19:13.443 [2024-07-15 18:38:35.917016] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:19:13.443 [2024-07-15 18:38:35.917029] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:13.443 [2024-07-15 18:38:36.002968] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem mdns1_nvme0 00:19:13.700 [2024-07-15 18:38:36.058765] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:19:13.700 [2024-07-15 18:38:36.058807] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:19:16.224 18:38:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # get_mdns_discovery_svcs 00:19:16.224 18:38:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:19:16.224 18:38:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.224 
18:38:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:16.224 18:38:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:19:16.224 18:38:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:19:16.224 18:38:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:19:16.224 18:38:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.224 18:38:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # [[ mdns == \m\d\n\s ]] 00:19:16.224 18:38:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # get_discovery_ctrlrs 00:19:16.224 18:38:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:19:16.224 18:38:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:19:16.224 18:38:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:19:16.224 18:38:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.224 18:38:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:16.224 18:38:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:19:16.224 18:38:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.224 18:38:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:19:16.224 18:38:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # get_subsystem_names 00:19:16.224 18:38:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:19:16.224 18:38:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:19:16.224 18:38:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:16.224 18:38:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.224 18:38:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:19:16.224 18:38:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:16.224 18:38:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.224 18:38:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:19:16.224 18:38:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@130 -- # get_bdev_list 00:19:16.224 18:38:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:16.224 18:38:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.224 18:38:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:19:16.224 18:38:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:16.224 18:38:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:19:16.224 18:38:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:19:16.224 18:38:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.224 18:38:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@130 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 
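The checks above poll the second nvmf_tgt instance (the "host", listening on /tmp/host.sock) for the state that mDNS discovery has built up. A minimal sketch of the same queries issued by hand, under the same rpc.py assumption as above and using only RPC names, flags, and jq filters that appear in this trace:

    # sketch: start mDNS-based discovery on the host socket, then inspect the result
    rpc.py -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
    rpc.py -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info              # the 'mdns' service
    rpc.py -s /tmp/host.sock bdev_nvme_get_discovery_info                   # mdns0_nvme / mdns1_nvme
    rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # mdns0_nvme0 / mdns1_nvme0
    rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'              # mdns0_nvme0n1, mdns1_nvme0n1, ...
    rpc.py -s /tmp/host.sock notify_get_notifications -i 0 | jq '. | length'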
00:19:16.224 18:38:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@131 -- # get_subsystem_paths mdns0_nvme0 00:19:16.224 18:38:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:19:16.224 18:38:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:19:16.224 18:38:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.224 18:38:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:16.224 18:38:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:16.224 18:38:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:19:16.224 18:38:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.224 18:38:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@131 -- # [[ 4420 == \4\4\2\0 ]] 00:19:16.224 18:38:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@132 -- # get_subsystem_paths mdns1_nvme0 00:19:16.224 18:38:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:19:16.224 18:38:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.224 18:38:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:16.224 18:38:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:19:16.224 18:38:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:16.224 18:38:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:19:16.224 18:38:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.224 18:38:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@132 -- # [[ 4420 == \4\4\2\0 ]] 00:19:16.224 18:38:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@133 -- # get_notification_count 00:19:16.224 18:38:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:19:16.224 18:38:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.224 18:38:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:16.224 18:38:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:19:16.224 18:38:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.224 18:38:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=2 00:19:16.224 18:38:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=2 00:19:16.224 18:38:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@134 -- # [[ 2 == 2 ]] 00:19:16.224 18:38:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:19:16.224 18:38:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.224 18:38:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:16.224 18:38:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.224 18:38:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@138 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:19:16.224 18:38:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.225 18:38:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:16.225 18:38:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.225 18:38:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@139 -- # sleep 1 00:19:17.155 18:38:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # get_bdev_list 00:19:17.155 18:38:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:19:17.155 18:38:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:19:17.155 18:38:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:17.155 18:38:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.155 18:38:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:17.155 18:38:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:19:17.412 18:38:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.412 18:38:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:19:17.412 18:38:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@142 -- # get_notification_count 00:19:17.412 18:38:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:19:17.412 18:38:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.412 18:38:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:17.412 18:38:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:19:17.412 18:38:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.412 18:38:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=2 00:19:17.412 18:38:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:19:17.412 18:38:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@143 -- # [[ 2 == 2 ]] 00:19:17.412 18:38:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@147 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:19:17.412 18:38:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.412 18:38:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:17.412 [2024-07-15 18:38:39.839830] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:17.412 [2024-07-15 18:38:39.840162] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:19:17.412 [2024-07-15 18:38:39.840189] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:17.412 [2024-07-15 18:38:39.840218] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:19:17.412 [2024-07-15 18:38:39.840229] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:17.412 18:38:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.412 18:38:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421 00:19:17.412 18:38:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.412 18:38:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:17.412 [2024-07-15 18:38:39.851811] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:17.412 [2024-07-15 18:38:39.852157] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:19:17.412 [2024-07-15 18:38:39.852197] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:19:17.412 18:38:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.412 18:38:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@149 -- # sleep 1 00:19:17.412 [2024-07-15 18:38:39.982031] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new path for mdns0_nvme0 00:19:17.412 [2024-07-15 18:38:39.983016] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for mdns1_nvme0 00:19:17.669 [2024-07-15 18:38:40.044223] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:19:17.669 [2024-07-15 18:38:40.044263] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:19:17.669 [2024-07-15 18:38:40.044270] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:19:17.669 [2024-07-15 18:38:40.044288] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:17.669 
[2024-07-15 18:38:40.045091] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:19:17.669 [2024-07-15 18:38:40.045106] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:19:17.669 [2024-07-15 18:38:40.045111] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:19:17.669 [2024-07-15 18:38:40.045124] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:17.670 [2024-07-15 18:38:40.089936] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:19:17.670 [2024-07-15 18:38:40.089971] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:19:17.670 [2024-07-15 18:38:40.090920] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:19:17.670 [2024-07-15 18:38:40.090932] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:19:18.603 18:38:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@151 -- # get_subsystem_names 00:19:18.603 18:38:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:18.603 18:38:40 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.603 18:38:40 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:18.603 18:38:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:19:18.603 18:38:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:19:18.603 18:38:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:19:18.603 18:38:40 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.603 18:38:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@151 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:19:18.603 18:38:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # get_bdev_list 00:19:18.603 18:38:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:18.603 18:38:40 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.603 18:38:40 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:18.603 18:38:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:19:18.603 18:38:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:19:18.603 18:38:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:19:18.603 18:38:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.603 18:38:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:19:18.603 18:38:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@153 -- # get_subsystem_paths mdns0_nvme0 
00:19:18.603 18:38:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:19:18.603 18:38:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.603 18:38:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:18.603 18:38:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:18.603 18:38:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:19:18.603 18:38:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:19:18.603 18:38:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.603 18:38:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@153 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:19:18.603 18:38:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # get_subsystem_paths mdns1_nvme0 00:19:18.603 18:38:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:19:18.603 18:38:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:19:18.603 18:38:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:18.603 18:38:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.603 18:38:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:18.603 18:38:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:19:18.603 18:38:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.603 18:38:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:19:18.603 18:38:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@155 -- # get_notification_count 00:19:18.603 18:38:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:19:18.603 18:38:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.603 18:38:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:19:18.603 18:38:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:18.603 18:38:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.603 18:38:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=0 00:19:18.603 18:38:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:19:18.603 18:38:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@156 -- # [[ 0 == 0 ]] 00:19:18.603 18:38:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@160 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:18.603 18:38:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.603 18:38:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:18.603 [2024-07-15 18:38:41.155402] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:19:18.603 [2024-07-15 18:38:41.155439] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:18.603 [2024-07-15 18:38:41.155468] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:19:18.603 [2024-07-15 18:38:41.155479] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:18.603 18:38:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.603 18:38:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@161 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:19:18.603 18:38:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.603 18:38:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:18.603 [2024-07-15 18:38:41.160223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:18.603 [2024-07-15 18:38:41.160251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.603 [2024-07-15 18:38:41.160263] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:18.603 [2024-07-15 18:38:41.160273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.603 [2024-07-15 18:38:41.160283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:18.603 [2024-07-15 18:38:41.160291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.603 [2024-07-15 18:38:41.160301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:18.603 [2024-07-15 18:38:41.160310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.603 [2024-07-15 18:38:41.160318] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6653a0 is same with the state(5) to be set 00:19:18.603 [2024-07-15 18:38:41.167407] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: 
Discovery[10.0.0.3:8009] got aer 00:19:18.603 [2024-07-15 18:38:41.167459] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:19:18.603 [2024-07-15 18:38:41.170170] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6653a0 (9): Bad file descriptor 00:19:18.603 18:38:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.603 [2024-07-15 18:38:41.171812] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:18.603 [2024-07-15 18:38:41.171836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.603 [2024-07-15 18:38:41.171847] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:18.603 [2024-07-15 18:38:41.171857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.603 [2024-07-15 18:38:41.171866] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:18.603 [2024-07-15 18:38:41.171875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.603 [2024-07-15 18:38:41.171885] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:18.603 [2024-07-15 18:38:41.171894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.603 [2024-07-15 18:38:41.171903] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61e360 is same with the state(5) to be set 00:19:18.603 18:38:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@162 -- # sleep 1 00:19:18.603 [2024-07-15 18:38:41.180171] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:18.603 [2024-07-15 18:38:41.180278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:18.603 [2024-07-15 18:38:41.180294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6653a0 with addr=10.0.0.2, port=4420 00:19:18.603 [2024-07-15 18:38:41.180304] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6653a0 is same with the state(5) to be set 00:19:18.603 [2024-07-15 18:38:41.180320] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6653a0 (9): Bad file descriptor 00:19:18.603 [2024-07-15 18:38:41.180333] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:18.603 [2024-07-15 18:38:41.180342] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:18.603 [2024-07-15 18:38:41.180352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:18.603 [2024-07-15 18:38:41.180380] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:18.604 [2024-07-15 18:38:41.181761] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61e360 (9): Bad file descriptor 00:19:18.604 [2024-07-15 18:38:41.190210] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:18.604 [2024-07-15 18:38:41.190301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:18.604 [2024-07-15 18:38:41.190318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6653a0 with addr=10.0.0.2, port=4420 00:19:18.604 [2024-07-15 18:38:41.190327] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6653a0 is same with the state(5) to be set 00:19:18.604 [2024-07-15 18:38:41.190340] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6653a0 (9): Bad file descriptor 00:19:18.604 [2024-07-15 18:38:41.190365] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:18.604 [2024-07-15 18:38:41.190375] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:18.604 [2024-07-15 18:38:41.190384] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:18.604 [2024-07-15 18:38:41.190396] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:18.604 [2024-07-15 18:38:41.191756] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:19:18.604 [2024-07-15 18:38:41.191832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:18.604 [2024-07-15 18:38:41.191847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61e360 with addr=10.0.0.3, port=4420 00:19:18.604 [2024-07-15 18:38:41.191858] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61e360 is same with the state(5) to be set 00:19:18.604 [2024-07-15 18:38:41.191871] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61e360 (9): Bad file descriptor 00:19:18.604 [2024-07-15 18:38:41.191883] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:19:18.604 [2024-07-15 18:38:41.191892] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:19:18.604 [2024-07-15 18:38:41.191902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:19:18.604 [2024-07-15 18:38:41.191914] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:18.604 [2024-07-15 18:38:41.200248] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:18.604 [2024-07-15 18:38:41.200332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:18.604 [2024-07-15 18:38:41.200348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6653a0 with addr=10.0.0.2, port=4420 00:19:18.604 [2024-07-15 18:38:41.200358] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6653a0 is same with the state(5) to be set 00:19:18.604 [2024-07-15 18:38:41.200372] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6653a0 (9): Bad file descriptor 00:19:18.604 [2024-07-15 18:38:41.200398] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:18.604 [2024-07-15 18:38:41.200407] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:18.604 [2024-07-15 18:38:41.200416] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:18.604 [2024-07-15 18:38:41.200427] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:18.604 [2024-07-15 18:38:41.201785] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:19:18.604 [2024-07-15 18:38:41.201845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:18.604 [2024-07-15 18:38:41.201859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61e360 with addr=10.0.0.3, port=4420 00:19:18.604 [2024-07-15 18:38:41.201867] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61e360 is same with the state(5) to be set 00:19:18.604 [2024-07-15 18:38:41.201879] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61e360 (9): Bad file descriptor 00:19:18.604 [2024-07-15 18:38:41.201890] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:19:18.604 [2024-07-15 18:38:41.201898] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:19:18.604 [2024-07-15 18:38:41.201907] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:19:18.604 [2024-07-15 18:38:41.201917] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:18.604 [2024-07-15 18:38:41.210284] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:18.604 [2024-07-15 18:38:41.210355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:18.604 [2024-07-15 18:38:41.210370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6653a0 with addr=10.0.0.2, port=4420 00:19:18.604 [2024-07-15 18:38:41.210379] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6653a0 is same with the state(5) to be set 00:19:18.604 [2024-07-15 18:38:41.210391] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6653a0 (9): Bad file descriptor 00:19:18.604 [2024-07-15 18:38:41.210416] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:18.604 [2024-07-15 18:38:41.210425] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:18.604 [2024-07-15 18:38:41.210433] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:18.604 [2024-07-15 18:38:41.210444] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:18.604 [2024-07-15 18:38:41.211809] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:19:18.604 [2024-07-15 18:38:41.211880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:18.604 [2024-07-15 18:38:41.211896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61e360 with addr=10.0.0.3, port=4420 00:19:18.604 [2024-07-15 18:38:41.211905] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61e360 is same with the state(5) to be set 00:19:18.604 [2024-07-15 18:38:41.211918] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61e360 (9): Bad file descriptor 00:19:18.604 [2024-07-15 18:38:41.211930] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:19:18.604 [2024-07-15 18:38:41.211939] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:19:18.604 [2024-07-15 18:38:41.211948] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:19:18.604 [2024-07-15 18:38:41.211959] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:18.862 [2024-07-15 18:38:41.220339] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:18.862 [2024-07-15 18:38:41.220489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:18.862 [2024-07-15 18:38:41.220509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6653a0 with addr=10.0.0.2, port=4420 00:19:18.862 [2024-07-15 18:38:41.220521] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6653a0 is same with the state(5) to be set 00:19:18.862 [2024-07-15 18:38:41.220551] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6653a0 (9): Bad file descriptor 00:19:18.862 [2024-07-15 18:38:41.220564] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:18.862 [2024-07-15 18:38:41.220584] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:18.862 [2024-07-15 18:38:41.220594] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:18.862 [2024-07-15 18:38:41.220606] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:18.862 [2024-07-15 18:38:41.221839] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:19:18.862 [2024-07-15 18:38:41.221910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:18.862 [2024-07-15 18:38:41.221925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61e360 with addr=10.0.0.3, port=4420 00:19:18.862 [2024-07-15 18:38:41.221934] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61e360 is same with the state(5) to be set 00:19:18.862 [2024-07-15 18:38:41.221947] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61e360 (9): Bad file descriptor 00:19:18.862 [2024-07-15 18:38:41.221960] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:19:18.862 [2024-07-15 18:38:41.221968] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:19:18.862 [2024-07-15 18:38:41.221977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:19:18.862 [2024-07-15 18:38:41.221989] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:18.862 [2024-07-15 18:38:41.230425] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:18.862 [2024-07-15 18:38:41.230590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:18.862 [2024-07-15 18:38:41.230618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6653a0 with addr=10.0.0.2, port=4420 00:19:18.862 [2024-07-15 18:38:41.230631] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6653a0 is same with the state(5) to be set 00:19:18.862 [2024-07-15 18:38:41.230647] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6653a0 (9): Bad file descriptor 00:19:18.862 [2024-07-15 18:38:41.230660] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:18.862 [2024-07-15 18:38:41.230670] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:18.862 [2024-07-15 18:38:41.230680] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:18.862 [2024-07-15 18:38:41.230692] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:18.862 [2024-07-15 18:38:41.231867] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:19:18.862 [2024-07-15 18:38:41.231949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:18.862 [2024-07-15 18:38:41.231964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61e360 with addr=10.0.0.3, port=4420 00:19:18.862 [2024-07-15 18:38:41.231975] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61e360 is same with the state(5) to be set 00:19:18.862 [2024-07-15 18:38:41.231988] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61e360 (9): Bad file descriptor 00:19:18.862 [2024-07-15 18:38:41.232001] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:19:18.862 [2024-07-15 18:38:41.232009] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:19:18.862 [2024-07-15 18:38:41.232019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:19:18.862 [2024-07-15 18:38:41.232030] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:18.862 [2024-07-15 18:38:41.240483] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:18.862 [2024-07-15 18:38:41.240581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:18.862 [2024-07-15 18:38:41.240597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6653a0 with addr=10.0.0.2, port=4420 00:19:18.862 [2024-07-15 18:38:41.240607] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6653a0 is same with the state(5) to be set 00:19:18.862 [2024-07-15 18:38:41.240621] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6653a0 (9): Bad file descriptor 00:19:18.862 [2024-07-15 18:38:41.240633] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:18.863 [2024-07-15 18:38:41.240641] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:18.863 [2024-07-15 18:38:41.240650] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:18.863 [2024-07-15 18:38:41.240662] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:18.863 [2024-07-15 18:38:41.241901] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:19:18.863 [2024-07-15 18:38:41.241961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:18.863 [2024-07-15 18:38:41.241974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61e360 with addr=10.0.0.3, port=4420 00:19:18.863 [2024-07-15 18:38:41.241982] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61e360 is same with the state(5) to be set 00:19:18.863 [2024-07-15 18:38:41.241995] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61e360 (9): Bad file descriptor 00:19:18.863 [2024-07-15 18:38:41.242006] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:19:18.863 [2024-07-15 18:38:41.242014] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:19:18.863 [2024-07-15 18:38:41.242023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:19:18.863 [2024-07-15 18:38:41.242034] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:18.863 [2024-07-15 18:38:41.250516] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:18.863 [2024-07-15 18:38:41.250609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:18.863 [2024-07-15 18:38:41.250626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6653a0 with addr=10.0.0.2, port=4420 00:19:18.863 [2024-07-15 18:38:41.250635] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6653a0 is same with the state(5) to be set 00:19:18.863 [2024-07-15 18:38:41.250650] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6653a0 (9): Bad file descriptor 00:19:18.863 [2024-07-15 18:38:41.250662] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:18.863 [2024-07-15 18:38:41.250670] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:18.863 [2024-07-15 18:38:41.250679] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:18.863 [2024-07-15 18:38:41.250692] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:18.863 [2024-07-15 18:38:41.251925] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:19:18.863 [2024-07-15 18:38:41.251991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:18.863 [2024-07-15 18:38:41.252006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61e360 with addr=10.0.0.3, port=4420 00:19:18.863 [2024-07-15 18:38:41.252016] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61e360 is same with the state(5) to be set 00:19:18.863 [2024-07-15 18:38:41.252028] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61e360 (9): Bad file descriptor 00:19:18.863 [2024-07-15 18:38:41.252041] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:19:18.863 [2024-07-15 18:38:41.252049] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:19:18.863 [2024-07-15 18:38:41.252058] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:19:18.863 [2024-07-15 18:38:41.252069] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:18.863 [2024-07-15 18:38:41.260554] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:18.863 [2024-07-15 18:38:41.260660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:18.863 [2024-07-15 18:38:41.260677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6653a0 with addr=10.0.0.2, port=4420 00:19:18.863 [2024-07-15 18:38:41.260687] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6653a0 is same with the state(5) to be set 00:19:18.863 [2024-07-15 18:38:41.260701] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6653a0 (9): Bad file descriptor 00:19:18.863 [2024-07-15 18:38:41.260713] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:18.863 [2024-07-15 18:38:41.260721] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:18.863 [2024-07-15 18:38:41.260731] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:18.863 [2024-07-15 18:38:41.260743] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:18.863 [2024-07-15 18:38:41.261951] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:19:18.863 [2024-07-15 18:38:41.262010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:18.863 [2024-07-15 18:38:41.262023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61e360 with addr=10.0.0.3, port=4420 00:19:18.863 [2024-07-15 18:38:41.262032] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61e360 is same with the state(5) to be set 00:19:18.863 [2024-07-15 18:38:41.262044] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61e360 (9): Bad file descriptor 00:19:18.863 [2024-07-15 18:38:41.262055] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:19:18.863 [2024-07-15 18:38:41.262064] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:19:18.863 [2024-07-15 18:38:41.262072] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:19:18.863 [2024-07-15 18:38:41.262083] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:18.863 [2024-07-15 18:38:41.270605] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:18.863 [2024-07-15 18:38:41.270690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:18.863 [2024-07-15 18:38:41.270706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6653a0 with addr=10.0.0.2, port=4420 00:19:18.863 [2024-07-15 18:38:41.270715] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6653a0 is same with the state(5) to be set 00:19:18.863 [2024-07-15 18:38:41.270728] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6653a0 (9): Bad file descriptor 00:19:18.863 [2024-07-15 18:38:41.270740] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:18.863 [2024-07-15 18:38:41.270749] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:18.863 [2024-07-15 18:38:41.270757] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:18.863 [2024-07-15 18:38:41.270768] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:18.863 [2024-07-15 18:38:41.271975] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:19:18.863 [2024-07-15 18:38:41.272042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:18.863 [2024-07-15 18:38:41.272056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61e360 with addr=10.0.0.3, port=4420 00:19:18.863 [2024-07-15 18:38:41.272066] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61e360 is same with the state(5) to be set 00:19:18.863 [2024-07-15 18:38:41.272078] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61e360 (9): Bad file descriptor 00:19:18.863 [2024-07-15 18:38:41.272090] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:19:18.863 [2024-07-15 18:38:41.272099] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:19:18.863 [2024-07-15 18:38:41.272108] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:19:18.863 [2024-07-15 18:38:41.272119] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:18.863 [2024-07-15 18:38:41.280639] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:18.863 [2024-07-15 18:38:41.280714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:18.863 [2024-07-15 18:38:41.280729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6653a0 with addr=10.0.0.2, port=4420 00:19:18.863 [2024-07-15 18:38:41.280738] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6653a0 is same with the state(5) to be set 00:19:18.863 [2024-07-15 18:38:41.280750] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6653a0 (9): Bad file descriptor 00:19:18.863 [2024-07-15 18:38:41.280762] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:18.863 [2024-07-15 18:38:41.280770] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:18.863 [2024-07-15 18:38:41.280780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:18.863 [2024-07-15 18:38:41.280791] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:18.863 [2024-07-15 18:38:41.282000] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:19:18.863 [2024-07-15 18:38:41.282059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:18.863 [2024-07-15 18:38:41.282072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61e360 with addr=10.0.0.3, port=4420 00:19:18.863 [2024-07-15 18:38:41.282081] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61e360 is same with the state(5) to be set 00:19:18.863 [2024-07-15 18:38:41.282093] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61e360 (9): Bad file descriptor 00:19:18.863 [2024-07-15 18:38:41.282104] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:19:18.863 [2024-07-15 18:38:41.282112] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:19:18.863 [2024-07-15 18:38:41.282121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:19:18.863 [2024-07-15 18:38:41.282131] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:18.863 [2024-07-15 18:38:41.290669] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:18.863 [2024-07-15 18:38:41.290752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:18.863 [2024-07-15 18:38:41.290767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6653a0 with addr=10.0.0.2, port=4420 00:19:18.863 [2024-07-15 18:38:41.290777] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6653a0 is same with the state(5) to be set 00:19:18.863 [2024-07-15 18:38:41.290790] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6653a0 (9): Bad file descriptor 00:19:18.863 [2024-07-15 18:38:41.290801] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:18.863 [2024-07-15 18:38:41.290810] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:18.863 [2024-07-15 18:38:41.290819] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:18.863 [2024-07-15 18:38:41.290830] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:18.863 [2024-07-15 18:38:41.292023] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:19:18.863 [2024-07-15 18:38:41.292088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:18.863 [2024-07-15 18:38:41.292103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61e360 with addr=10.0.0.3, port=4420 00:19:18.863 [2024-07-15 18:38:41.292112] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61e360 is same with the state(5) to be set 00:19:18.863 [2024-07-15 18:38:41.292125] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61e360 (9): Bad file descriptor 00:19:18.863 [2024-07-15 18:38:41.292137] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:19:18.864 [2024-07-15 18:38:41.292145] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:19:18.864 [2024-07-15 18:38:41.292154] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:19:18.864 [2024-07-15 18:38:41.292166] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
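[editor's note] The blocks above repeat the same reconnect attempt roughly every 10 ms for each of the two discovery-attached controllers (cnode0 on 10.0.0.2:4420, cnode20 on 10.0.0.3:4420). Every attempt dies in posix_sock_create with errno = 111, which is ECONNREFUSED on Linux: nothing is listening on port 4420 any more at this point in the test, so each reset ends with the controller back in the failed state. A minimal sketch, not part of the harness, for tallying those failures per target address from a saved copy of this console log (the log path is an assumption, pass the real file as the first argument):

#!/usr/bin/env bash
# Count failed reconnect attempts per target address in a saved console log.
# errno 111 is ECONNREFUSED on Linux, i.e. no listener on the given addr:port.
log=${1:?usage: $0 <console-log>}

grep 'sock connection error of tqpair=' "$log" \
    | grep -o 'addr=[0-9.]*, port=[0-9]*' \
    | sort | uniq -c | sort -rn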
00:19:18.864 [2024-07-15 18:38:41.297390] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 not found 00:19:18.864 [2024-07-15 18:38:41.297414] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:19:18.864 [2024-07-15 18:38:41.297445] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:18.864 [2024-07-15 18:38:41.298400] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:19:18.864 [2024-07-15 18:38:41.298421] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:19:18.864 [2024-07-15 18:38:41.298436] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:18.864 [2024-07-15 18:38:41.383331] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:19:18.864 [2024-07-15 18:38:41.384310] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:19:19.799 18:38:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:19:19.799 18:38:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:19.799 18:38:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.799 18:38:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:19.799 18:38:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:19:19.799 18:38:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:19:19.799 18:38:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:19:19.799 18:38:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.799 18:38:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:19:19.799 18:38:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:19:19.799 18:38:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:19.799 18:38:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:19:19.800 18:38:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.800 18:38:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:19.800 18:38:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:19:19.800 18:38:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:19:19.800 18:38:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.800 18:38:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:19:19.800 18:38:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 
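[editor's note] The xtrace above shows how the test builds its expected values: get_subsystem_names and get_bdev_list are thin wrappers that issue one SPDK JSON-RPC call against the host application's socket and flatten the result with jq | sort | xargs into a single space-separated string, which the [[ ... == ... ]] checks then compare against the expected controller and bdev names. A stand-alone sketch of the same helpers, assuming scripts/rpc.py from the SPDK repo is called directly instead of the harness's rpc_cmd wrapper, and that the host app listens on /tmp/host.sock as in this run:

#!/usr/bin/env bash
# Sketch of the helpers exercised above, outside the autotest harness.
HOST_SOCK=/tmp/host.sock    # RPC socket used by the host application in this run

get_subsystem_names() {
    # Controllers attached via mDNS discovery, e.g. "mdns0_nvme0 mdns1_nvme0"
    scripts/rpc.py -s "$HOST_SOCK" bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
}

get_bdev_list() {
    # Namespaces exposed as bdevs, e.g. "mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2"
    scripts/rpc.py -s "$HOST_SOCK" bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

[[ "$(get_subsystem_names)" == "mdns0_nvme0 mdns1_nvme0" ]] && echo "controller names match"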
00:19:19.800 18:38:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:19:19.800 18:38:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:19:19.800 18:38:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:19:19.800 18:38:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.800 18:38:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:19.800 18:38:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:19.800 18:38:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.800 18:38:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # [[ 4421 == \4\4\2\1 ]] 00:19:19.800 18:38:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:19:19.800 18:38:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:19:19.800 18:38:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:19.800 18:38:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:19:19.800 18:38:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.800 18:38:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:19.800 18:38:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:19:19.800 18:38:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.800 18:38:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # [[ 4421 == \4\4\2\1 ]] 00:19:19.800 18:38:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@168 -- # get_notification_count 00:19:19.800 18:38:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:19:19.800 18:38:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:19:19.800 18:38:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.800 18:38:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:19.800 18:38:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.057 18:38:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=0 00:19:20.057 18:38:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:19:20.057 18:38:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@169 -- # [[ 0 == 0 ]] 00:19:20.057 18:38:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@171 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:19:20.057 18:38:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.057 18:38:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:20.057 18:38:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.057 18:38:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@172 -- # sleep 1 00:19:20.057 [2024-07-15 18:38:42.503229] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:19:20.994 18:38:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # get_mdns_discovery_svcs 00:19:20.994 18:38:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:19:20.994 18:38:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.994 18:38:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:20.994 18:38:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:19:20.994 18:38:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:19:20.994 18:38:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:19:20.994 18:38:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.994 18:38:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # [[ '' == '' ]] 00:19:20.994 18:38:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@175 -- # get_subsystem_names 00:19:20.994 18:38:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:19:20.994 18:38:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:19:20.994 18:38:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:20.994 18:38:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:19:20.994 18:38:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.994 18:38:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:20.994 18:38:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.994 18:38:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@175 -- # [[ '' == '' ]] 00:19:20.994 18:38:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:19:20.994 18:38:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:19:20.994 18:38:43 
nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:20.994 18:38:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:19:20.994 18:38:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:19:20.994 18:38:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.994 18:38:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:20.994 18:38:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.252 18:38:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # [[ '' == '' ]] 00:19:21.252 18:38:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@177 -- # get_notification_count 00:19:21.252 18:38:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:19:21.252 18:38:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.252 18:38:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:21.252 18:38:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. | length' 00:19:21.252 18:38:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.252 18:38:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=4 00:19:21.252 18:38:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=8 00:19:21.252 18:38:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@178 -- # [[ 4 == 4 ]] 00:19:21.252 18:38:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@181 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:19:21.252 18:38:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.252 18:38:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:21.252 18:38:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.252 18:38:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@182 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:19:21.252 18:38:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@648 -- # local es=0 00:19:21.252 18:38:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:19:21.252 18:38:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:19:21.252 18:38:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:21.252 18:38:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:19:21.253 18:38:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:21.253 18:38:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:19:21.253 18:38:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.253 18:38:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:21.253 [2024-07-15 
18:38:43.680043] bdev_mdns_client.c: 470:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:19:21.253 2024/07/15 18:38:43 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:19:21.253 request: 00:19:21.253 { 00:19:21.253 "method": "bdev_nvme_start_mdns_discovery", 00:19:21.253 "params": { 00:19:21.253 "name": "mdns", 00:19:21.253 "svcname": "_nvme-disc._http", 00:19:21.253 "hostnqn": "nqn.2021-12.io.spdk:test" 00:19:21.253 } 00:19:21.253 } 00:19:21.253 Got JSON-RPC error response 00:19:21.253 GoRPCClient: error on JSON-RPC call 00:19:21.253 18:38:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:19:21.253 18:38:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # es=1 00:19:21.253 18:38:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:21.253 18:38:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:21.253 18:38:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:21.253 18:38:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@183 -- # sleep 5 00:19:21.817 [2024-07-15 18:38:44.263742] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:19:21.817 [2024-07-15 18:38:44.363558] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:19:22.074 [2024-07-15 18:38:44.463419] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:19:22.074 [2024-07-15 18:38:44.463466] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:19:22.074 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:19:22.074 cookie is 0 00:19:22.074 is_local: 1 00:19:22.074 our_own: 0 00:19:22.074 wide_area: 0 00:19:22.074 multicast: 1 00:19:22.074 cached: 1 00:19:22.074 [2024-07-15 18:38:44.563271] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:19:22.074 [2024-07-15 18:38:44.563310] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:19:22.074 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:19:22.074 cookie is 0 00:19:22.074 is_local: 1 00:19:22.074 our_own: 0 00:19:22.074 wide_area: 0 00:19:22.074 multicast: 1 00:19:22.074 cached: 1 00:19:22.074 [2024-07-15 18:38:44.563324] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:19:22.074 [2024-07-15 18:38:44.663099] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:19:22.074 [2024-07-15 18:38:44.663136] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:19:22.074 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:19:22.074 cookie is 0 00:19:22.074 is_local: 1 00:19:22.074 our_own: 0 00:19:22.074 wide_area: 0 00:19:22.074 multicast: 1 00:19:22.074 cached: 1 00:19:22.332 [2024-07-15 18:38:44.762945] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:19:22.332 [2024-07-15 18:38:44.762984] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:19:22.332 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:19:22.332 cookie is 0 00:19:22.332 is_local: 1 00:19:22.332 our_own: 0 00:19:22.332 wide_area: 0 00:19:22.332 multicast: 1 00:19:22.332 cached: 1 00:19:22.332 [2024-07-15 18:38:44.762998] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.2 trid->trsvcid: 8009 00:19:22.939 [2024-07-15 18:38:45.473024] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:19:22.939 [2024-07-15 18:38:45.473065] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:19:22.939 [2024-07-15 18:38:45.473080] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:23.196 [2024-07-15 18:38:45.559009] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new subsystem mdns0_nvme0 00:19:23.196 [2024-07-15 18:38:45.618942] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:19:23.196 [2024-07-15 18:38:45.618984] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:19:23.196 [2024-07-15 18:38:45.672667] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:19:23.196 [2024-07-15 18:38:45.672704] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:19:23.196 [2024-07-15 18:38:45.672721] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:23.196 [2024-07-15 18:38:45.759663] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem mdns1_nvme0 00:19:23.454 [2024-07-15 18:38:45.819555] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:19:23.454 [2024-07-15 18:38:45.819613] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:19:26.737 18:38:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@185 -- # get_mdns_discovery_svcs 00:19:26.737 18:38:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:19:26.737 18:38:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.737 
18:38:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:26.737 18:38:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:19:26.737 18:38:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:19:26.737 18:38:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:19:26.737 18:38:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.737 18:38:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@185 -- # [[ mdns == \m\d\n\s ]] 00:19:26.737 18:38:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # get_discovery_ctrlrs 00:19:26.737 18:38:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:19:26.738 18:38:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.738 18:38:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:26.738 18:38:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:19:26.738 18:38:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:19:26.738 18:38:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:19:26.738 18:38:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.738 18:38:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:19:26.738 18:38:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:19:26.738 18:38:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:26.738 18:38:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.738 18:38:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:26.738 18:38:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:19:26.738 18:38:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:19:26.738 18:38:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:19:26.738 18:38:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.738 18:38:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:19:26.738 18:38:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@190 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:19:26.738 18:38:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@648 -- # local es=0 00:19:26.738 18:38:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:19:26.738 18:38:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:19:26.738 18:38:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:26.738 18:38:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:19:26.738 18:38:48 nvmf_tcp.nvmf_mdns_discovery -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:26.738 18:38:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:19:26.738 18:38:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.738 18:38:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:26.738 [2024-07-15 18:38:48.892637] bdev_mdns_client.c: 475:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:19:26.738 2024/07/15 18:38:48 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:19:26.738 request: 00:19:26.738 { 00:19:26.738 "method": "bdev_nvme_start_mdns_discovery", 00:19:26.738 "params": { 00:19:26.738 "name": "cdc", 00:19:26.738 "svcname": "_nvme-disc._tcp", 00:19:26.738 "hostnqn": "nqn.2021-12.io.spdk:test" 00:19:26.738 } 00:19:26.738 } 00:19:26.738 Got JSON-RPC error response 00:19:26.738 GoRPCClient: error on JSON-RPC call 00:19:26.738 18:38:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:19:26.738 18:38:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # es=1 00:19:26.738 18:38:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:26.738 18:38:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:26.738 18:38:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:26.738 18:38:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # get_discovery_ctrlrs 00:19:26.738 18:38:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:19:26.738 18:38:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:19:26.738 18:38:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.738 18:38:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:26.738 18:38:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:19:26.738 18:38:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:19:26.738 18:38:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.738 18:38:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:19:26.738 18:38:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@192 -- # get_bdev_list 00:19:26.738 18:38:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:19:26.738 18:38:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:26.738 18:38:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:19:26.738 18:38:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.738 18:38:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:19:26.738 18:38:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:26.738 18:38:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:19:26.738 18:38:49 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@192 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:19:26.738 18:38:49 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@193 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:19:26.738 18:38:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.738 18:38:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:26.738 18:38:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.738 18:38:49 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@195 -- # rpc_cmd nvmf_stop_mdns_prr 00:19:26.738 18:38:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.738 18:38:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:26.738 [2024-07-15 18:38:49.055977] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:19:26.738 18:38:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.738 18:38:49 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@197 -- # trap - SIGINT SIGTERM EXIT 00:19:26.738 18:38:49 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@199 -- # kill 93662 00:19:26.738 18:38:49 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@202 -- # wait 93662 00:19:26.738 18:38:49 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@203 -- # kill 93691 00:19:26.738 Got SIGTERM, quitting. 00:19:26.738 18:38:49 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@204 -- # nvmftestfini 00:19:26.738 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:19:26.738 18:38:49 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:26.738 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:19:26.738 avahi-daemon 0.8 exiting. 
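[editor's note] The two NOT-wrapped RPC calls earlier in this test both expect the target to reject a second mDNS discovery request with JSON-RPC error Code=-17 (File exists): once because the bdev name "mdns" is already in use (requested again for _nvme-disc._http), and once because the service _nvme-disc._tcp already has a running poller (requested under the new name "cdc"). A sketch of the same two negative checks issued directly through scripts/rpc.py rather than the harness's NOT/rpc_cmd wrappers, assuming the discovery named "mdns" for _nvme-disc._tcp is still running on /tmp/host.sock as it is at that point in the log:

#!/usr/bin/env bash
# Both duplicate bdev_nvme_start_mdns_discovery calls must fail with Code=-17
# (File exists) while the discovery named "mdns" for _nvme-disc._tcp is active.
rpc() { scripts/rpc.py -s /tmp/host.sock "$@"; }

# Same bdev name, different service -> "already running with name mdns"
if rpc bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test; then
    echo "unexpected success: duplicate discovery name accepted" >&2; exit 1
fi

# Different bdev name, same service -> "already running for service _nvme-disc._tcp"
if rpc bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test; then
    echo "unexpected success: duplicate discovery service accepted" >&2; exit 1
fi
echo "both duplicate starts rejected as expected"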
00:19:26.738 18:38:49 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@117 -- # sync 00:19:26.738 18:38:49 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:26.738 18:38:49 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@120 -- # set +e 00:19:26.738 18:38:49 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:26.738 18:38:49 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:26.738 rmmod nvme_tcp 00:19:26.738 rmmod nvme_fabrics 00:19:26.738 rmmod nvme_keyring 00:19:26.738 18:38:49 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:26.738 18:38:49 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@124 -- # set -e 00:19:26.738 18:38:49 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@125 -- # return 0 00:19:26.738 18:38:49 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@489 -- # '[' -n 93612 ']' 00:19:26.738 18:38:49 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@490 -- # killprocess 93612 00:19:26.738 18:38:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@948 -- # '[' -z 93612 ']' 00:19:26.738 18:38:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@952 -- # kill -0 93612 00:19:26.738 18:38:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@953 -- # uname 00:19:26.738 18:38:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:26.738 18:38:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93612 00:19:27.012 18:38:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:27.012 killing process with pid 93612 00:19:27.012 18:38:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:27.012 18:38:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93612' 00:19:27.012 18:38:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@967 -- # kill 93612 00:19:27.012 18:38:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@972 -- # wait 93612 00:19:27.012 18:38:49 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:27.012 18:38:49 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:27.012 18:38:49 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:27.012 18:38:49 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:27.012 18:38:49 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:27.012 18:38:49 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:27.012 18:38:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:27.012 18:38:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:27.012 18:38:49 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:27.012 00:19:27.012 real 0m20.619s 00:19:27.012 user 0m39.188s 00:19:27.012 sys 0m2.932s 00:19:27.012 ************************************ 00:19:27.012 END TEST nvmf_mdns_discovery 00:19:27.012 ************************************ 00:19:27.012 18:38:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:27.012 18:38:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 
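[editor's note] The teardown above reads as a checklist: the script kills the processes it started (93662 and 93691 in this run), unloads the NVMe/TCP kernel modules, then kills the target process (pid 93612) after confirming it is still alive, and finally removes the network namespace and flushes nvmf_init_if. A condensed sketch of that sequence; the PIDs are hard-coded from this run purely for illustration and would normally come from the harness's own bookkeeping:

#!/usr/bin/env bash
# Condensed sketch of the cleanup performed above (illustrative PIDs from this run).
killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0   # already gone, nothing to do
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true          # reap it if it is our child
}

killprocess 93662
killprocess 93691
modprobe -v -r nvme-tcp       # in this run this also removed nvme_fabrics and nvme_keyring
modprobe -v -r nvme-fabrics
killprocess 93612             # the nvmf target itself
# The harness additionally removes the nvmf_tgt_ns_spdk namespace and runs
# "ip -4 addr flush nvmf_init_if", as seen in the log above.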
00:19:27.271 18:38:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:27.271 18:38:49 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 1 -eq 1 ]] 00:19:27.271 18:38:49 nvmf_tcp -- nvmf/nvmf.sh@117 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:19:27.271 18:38:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:27.271 18:38:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:27.271 18:38:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:27.271 ************************************ 00:19:27.271 START TEST nvmf_host_multipath 00:19:27.271 ************************************ 00:19:27.271 18:38:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:19:27.271 * Looking for test storage... 00:19:27.271 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:27.271 18:38:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:27.271 18:38:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:19:27.271 18:38:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:27.271 18:38:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:27.271 18:38:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:27.271 18:38:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:27.271 18:38:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:27.271 18:38:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:27.271 18:38:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:27.271 18:38:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:27.271 18:38:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:27.271 18:38:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:27.271 18:38:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:19:27.271 18:38:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=ee8aff67-4252-4979-91cf-1a72f40d57b6 00:19:27.271 18:38:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:27.271 18:38:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:27.271 18:38:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:27.271 18:38:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:27.271 18:38:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:27.271 18:38:49 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:27.271 18:38:49 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:27.271 18:38:49 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:27.271 18:38:49 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.271 18:38:49 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.271 18:38:49 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.271 18:38:49 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:19:27.271 18:38:49 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.271 18:38:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@47 -- # : 0 00:19:27.271 18:38:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:27.271 18:38:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:27.271 18:38:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:27.271 18:38:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:27.271 18:38:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:27.271 18:38:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:27.271 18:38:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:27.271 18:38:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:27.271 18:38:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:27.271 18:38:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:27.271 18:38:49 nvmf_tcp.nvmf_host_multipath 
-- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:27.271 18:38:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:19:27.271 18:38:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:27.271 18:38:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:19:27.271 18:38:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:19:27.271 18:38:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:27.271 18:38:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:27.271 18:38:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:27.271 18:38:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:27.271 18:38:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:27.271 18:38:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:27.271 18:38:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:27.271 18:38:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:27.271 18:38:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:27.271 18:38:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:27.271 18:38:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:27.271 18:38:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:27.271 18:38:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:27.271 18:38:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:27.271 18:38:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:27.271 18:38:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:27.271 18:38:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:27.271 18:38:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:27.271 18:38:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:27.271 18:38:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:27.271 18:38:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:27.271 18:38:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:27.271 18:38:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:27.271 18:38:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:27.271 18:38:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:27.271 18:38:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:27.271 18:38:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:27.530 18:38:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:27.530 Cannot 
find device "nvmf_tgt_br" 00:19:27.530 18:38:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # true 00:19:27.530 18:38:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:27.530 Cannot find device "nvmf_tgt_br2" 00:19:27.530 18:38:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # true 00:19:27.530 18:38:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:27.530 18:38:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:27.530 Cannot find device "nvmf_tgt_br" 00:19:27.530 18:38:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # true 00:19:27.530 18:38:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:27.530 Cannot find device "nvmf_tgt_br2" 00:19:27.530 18:38:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # true 00:19:27.530 18:38:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:27.530 18:38:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:27.530 18:38:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:27.530 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:27.530 18:38:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:19:27.530 18:38:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:27.530 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:27.530 18:38:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:19:27.530 18:38:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:27.530 18:38:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:27.530 18:38:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:27.530 18:38:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:27.530 18:38:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:27.530 18:38:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:27.530 18:38:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:27.530 18:38:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:27.530 18:38:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:27.530 18:38:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:27.530 18:38:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:27.793 18:38:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:27.793 18:38:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:27.793 18:38:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:27.793 18:38:50 
nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:27.793 18:38:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:27.793 18:38:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:27.793 18:38:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:27.793 18:38:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:27.793 18:38:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:27.793 18:38:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:27.793 18:38:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:27.793 18:38:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:27.793 18:38:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:27.793 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:27.793 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:19:27.793 00:19:27.793 --- 10.0.0.2 ping statistics --- 00:19:27.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:27.793 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:19:27.793 18:38:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:27.793 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:27.793 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:19:27.793 00:19:27.793 --- 10.0.0.3 ping statistics --- 00:19:27.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:27.793 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:19:27.793 18:38:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:27.793 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:27.793 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:19:27.793 00:19:27.793 --- 10.0.0.1 ping statistics --- 00:19:27.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:27.793 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:19:27.793 18:38:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:27.793 18:38:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@433 -- # return 0 00:19:27.793 18:38:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:27.793 18:38:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:27.793 18:38:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:27.793 18:38:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:27.793 18:38:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:27.793 18:38:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:27.793 18:38:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:27.793 18:38:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:19:27.793 18:38:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:27.793 18:38:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:27.793 18:38:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:27.793 18:38:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:19:27.793 18:38:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@481 -- # nvmfpid=94257 00:19:27.793 18:38:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@482 -- # waitforlisten 94257 00:19:27.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:27.793 18:38:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@829 -- # '[' -z 94257 ']' 00:19:27.793 18:38:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:27.793 18:38:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:27.793 18:38:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:27.793 18:38:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:27.793 18:38:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:27.793 [2024-07-15 18:38:50.354299] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:19:27.793 [2024-07-15 18:38:50.355044] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:28.050 [2024-07-15 18:38:50.497168] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:28.050 [2024-07-15 18:38:50.595875] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
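The nvmftestinit / nvmf_veth_init phase traced above builds the test network before the target is launched: a network namespace holding the two target-side interfaces, one veth pair per interface, a bridge joining the host-side peers, an iptables rule admitting NVMe/TCP on port 4420, and ping checks in both directions. Below is a condensed sketch reconstructed only from the commands echoed above; the cleanup steps and their "|| true" guards are omitted, so treat it as illustrative rather than the exact helper.

  # Network namespace for the target and three veth pairs (reconstructed from the trace above).
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  # Move the target-side ends into the namespace and assign the addresses from
  # NVMF_INITIATOR_IP / NVMF_FIRST_TARGET_IP / NVMF_SECOND_TARGET_IP above.
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # Bridge the host-side peers together and let the NVMe/TCP traffic through.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # Connectivity sanity checks, matching the ping output in the trace.
  ping -c 1 10.0.0.2
  ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1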
00:19:28.050 [2024-07-15 18:38:50.596128] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:28.050 [2024-07-15 18:38:50.596288] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:28.050 [2024-07-15 18:38:50.596348] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:28.050 [2024-07-15 18:38:50.596374] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:28.050 [2024-07-15 18:38:50.596635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:28.050 [2024-07-15 18:38:50.596637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:28.613 18:38:51 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:28.613 18:38:51 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:19:28.613 18:38:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:28.614 18:38:51 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:28.614 18:38:51 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:28.871 18:38:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:28.871 18:38:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=94257 00:19:28.871 18:38:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:28.871 [2024-07-15 18:38:51.447705] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:28.871 18:38:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:29.128 Malloc0 00:19:29.128 18:38:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:19:29.385 18:38:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:29.642 18:38:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:29.900 [2024-07-15 18:38:52.324927] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:29.900 18:38:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:30.167 [2024-07-15 18:38:52.524790] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:30.167 18:38:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=94352 00:19:30.167 18:38:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:19:30.167 18:38:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:30.167 18:38:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@47 -- # 
waitforlisten 94352 /var/tmp/bdevperf.sock 00:19:30.167 18:38:52 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@829 -- # '[' -z 94352 ']' 00:19:30.167 18:38:52 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:30.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:30.167 18:38:52 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:30.167 18:38:52 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:30.167 18:38:52 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:30.167 18:38:52 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:31.099 18:38:53 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:31.099 18:38:53 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:19:31.099 18:38:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:31.099 18:38:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:19:31.663 Nvme0n1 00:19:31.663 18:38:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:19:31.920 Nvme0n1 00:19:31.920 18:38:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:19:31.920 18:38:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:19:32.853 18:38:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:19:32.853 18:38:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:19:33.111 18:38:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:19:33.368 18:38:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:19:33.368 18:38:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94257 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:33.368 18:38:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=94438 00:19:33.368 18:38:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:39.972 18:39:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:39.972 18:39:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:19:39.972 18:39:02 
nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:19:39.972 18:39:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:39.972 Attaching 4 probes... 00:19:39.972 @path[10.0.0.2, 4421]: 22805 00:19:39.972 @path[10.0.0.2, 4421]: 23354 00:19:39.972 @path[10.0.0.2, 4421]: 23219 00:19:39.972 @path[10.0.0.2, 4421]: 23302 00:19:39.972 @path[10.0.0.2, 4421]: 23594 00:19:39.972 18:39:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:39.972 18:39:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:39.972 18:39:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:19:39.972 18:39:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:19:39.972 18:39:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:19:39.972 18:39:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:19:39.972 18:39:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 94438 00:19:39.972 18:39:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:39.972 18:39:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:19:39.972 18:39:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:19:39.972 18:39:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:19:39.972 18:39:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:19:39.972 18:39:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=94573 00:19:39.972 18:39:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94257 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:39.972 18:39:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:46.535 18:39:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:46.535 18:39:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:19:46.535 18:39:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:19:46.535 18:39:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:46.535 Attaching 4 probes... 
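The confirm_io_on_port checks that recur through this run are pieced together from multipath.sh@64-@73 in the trace: start the nvmf_path.bt bpftrace probes against the running target, let bdevperf push IO for six seconds, then compare the listener the target reports for the requested ANA state with the port that actually shows up in the @path[...] counters. The sketch below reuses the rpc_py, bpf_sh and nvmfapp_pid values set earlier in the trace; how the helper redirects the bpftrace output into trace.txt, how it captures the probe pid, and the exact order of the awk/cut/sed pipeline are assumptions, only the individual commands are the ones traced.

  # Rough shape of confirm_io_on_port, reconstructed from multipath.sh@64-@73 above.
  confirm_io_on_port() {
      local expected_state=$1 expected_port=$2
      local trace=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
      # Attach the per-path completion counters to the target (pid 94257 in this run).
      # Redirection into trace.txt and the $! capture are assumed details of the helper.
      "$bpf_sh" "$nvmfapp_pid" /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt > "$trace" &
      dtrace_pid=$!
      sleep 6   # let bdevperf run IO while the probes count completions per path
      # Which listener does the target report in the requested ANA state?
      active_port=$("$rpc_py" nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 \
          | jq -r ".[] | select (.ana_states[0].ana_state==\"$expected_state\") | .address.trsvcid")
      # Which port did the IO actually land on, according to the @path[10.0.0.2, ...] lines?
      port=$(awk '$1=="@path[10.0.0.2," {print $2}' "$trace" | cut -d ']' -f1 | sed -n 1p)
      [[ $active_port == "$expected_port" ]]
      [[ $port == "$expected_port" ]]
      kill "$dtrace_pid"
      rm -f "$trace"
  }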
00:19:46.535 @path[10.0.0.2, 4420]: 22568 00:19:46.535 @path[10.0.0.2, 4420]: 23065 00:19:46.535 @path[10.0.0.2, 4420]: 23124 00:19:46.535 @path[10.0.0.2, 4420]: 22723 00:19:46.535 @path[10.0.0.2, 4420]: 23134 00:19:46.535 18:39:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:46.535 18:39:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:19:46.535 18:39:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:46.535 18:39:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:19:46.535 18:39:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:19:46.535 18:39:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:19:46.535 18:39:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 94573 00:19:46.535 18:39:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:46.535 18:39:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:19:46.535 18:39:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:19:46.535 18:39:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:19:46.810 18:39:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:19:46.810 18:39:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=94702 00:19:46.810 18:39:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94257 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:46.810 18:39:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:53.393 18:39:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:53.393 18:39:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:19:53.393 18:39:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:19:53.393 18:39:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:53.393 Attaching 4 probes... 
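For context, the subsystem and the pair of listeners whose ANA states keep being flipped here were provisioned earlier in the trace (multipath.sh@35-@41). Condensed from those rpc.py invocations, and using the rpc_py path defined at multipath.sh@14, that setup is roughly:

  # Target-side provisioning, condensed from multipath.sh@35-@41 earlier in this log.
  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc_py nvmf_create_transport -t tcp -o -u 8192
  $rpc_py bdev_malloc_create 64 512 -b Malloc0     # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  # Two listeners on the same address: the two paths this multipath test switches between.
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421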
00:19:53.393 @path[10.0.0.2, 4421]: 17091 00:19:53.393 @path[10.0.0.2, 4421]: 22622 00:19:53.393 @path[10.0.0.2, 4421]: 23079 00:19:53.393 @path[10.0.0.2, 4421]: 23449 00:19:53.393 @path[10.0.0.2, 4421]: 23374 00:19:53.393 18:39:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:53.393 18:39:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:19:53.393 18:39:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:53.393 18:39:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:19:53.393 18:39:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:19:53.393 18:39:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:19:53.393 18:39:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 94702 00:19:53.393 18:39:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:53.393 18:39:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:19:53.393 18:39:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:19:53.393 18:39:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:19:53.393 18:39:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:19:53.393 18:39:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94257 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:53.393 18:39:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=94833 00:19:53.393 18:39:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:59.964 18:39:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:59.964 18:39:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:19:59.964 18:39:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:19:59.964 18:39:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:59.964 Attaching 4 probes... 
00:19:59.964 00:19:59.964 00:19:59.964 00:19:59.964 00:19:59.964 00:19:59.964 18:39:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:59.964 18:39:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:19:59.964 18:39:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:59.964 18:39:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:19:59.964 18:39:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:19:59.964 18:39:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:19:59.964 18:39:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 94833 00:19:59.964 18:39:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:59.964 18:39:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:19:59.964 18:39:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:19:59.964 18:39:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:19:59.964 18:39:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:19:59.964 18:39:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=94964 00:19:59.964 18:39:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:59.964 18:39:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94257 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:06.528 18:39:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:06.528 18:39:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:20:06.528 18:39:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:20:06.528 18:39:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:06.528 Attaching 4 probes... 
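Each set_ANA_state call traced at multipath.sh@58-@59 addresses the same two listeners in the same order, 4420 first and 4421 second, so the helper amounts to the two rpc.py calls below; the function wrapper itself is a reconstruction, the commands and the sequence of state combinations are copied from this run.

  # set_ANA_state as it appears from multipath.sh@58-@59: first argument -> 4420, second -> 4421.
  set_ANA_state() {
      $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
          -t tcp -a 10.0.0.2 -s 4420 -n "$1"
      $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
          -t tcp -a 10.0.0.2 -s 4421 -n "$2"
  }
  # Combinations exercised so far in this run, with the port expected to carry IO:
  set_ANA_state non_optimized optimized      # -> 4421
  set_ANA_state non_optimized inaccessible   # -> 4420
  set_ANA_state inaccessible optimized       # -> 4421
  set_ANA_state inaccessible inaccessible    # -> no usable path, hence the empty @path trace above
  set_ANA_state non_optimized optimized      # -> back to 4421 (the check whose output follows)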
00:20:06.528 @path[10.0.0.2, 4421]: 22714 00:20:06.528 @path[10.0.0.2, 4421]: 22849 00:20:06.528 @path[10.0.0.2, 4421]: 22940 00:20:06.528 @path[10.0.0.2, 4421]: 22332 00:20:06.528 @path[10.0.0.2, 4421]: 22657 00:20:06.528 18:39:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:06.528 18:39:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:20:06.528 18:39:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:06.528 18:39:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:20:06.528 18:39:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:20:06.528 18:39:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:20:06.528 18:39:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 94964 00:20:06.528 18:39:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:06.528 18:39:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:06.528 [2024-07-15 18:39:29.009215] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ea440 is same with the state(5) to be set 00:20:06.528 [2024-07-15 18:39:29.009270] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ea440 is same with the state(5) to be set 00:20:06.528 [2024-07-15 18:39:29.009281] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ea440 is same with the state(5) to be set 00:20:06.528 [2024-07-15 18:39:29.009290] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ea440 is same with the state(5) to be set 00:20:06.528 [2024-07-15 18:39:29.009299] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ea440 is same with the state(5) to be set 00:20:06.528 [2024-07-15 18:39:29.009308] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ea440 is same with the state(5) to be set 00:20:06.528 [2024-07-15 18:39:29.009317] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ea440 is same with the state(5) to be set 00:20:06.528 [2024-07-15 18:39:29.009325] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ea440 is same with the state(5) to be set 00:20:06.528 [2024-07-15 18:39:29.009333] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ea440 is same with the state(5) to be set 00:20:06.528 [2024-07-15 18:39:29.009342] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ea440 is same with the state(5) to be set 00:20:06.528 [2024-07-15 18:39:29.009351] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ea440 is same with the state(5) to be set 00:20:06.528 [2024-07-15 18:39:29.009359] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ea440 is same with the state(5) to be set 00:20:06.528 [2024-07-15 18:39:29.009368] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ea440 is same with the state(5) to be set 00:20:06.528 [2024-07-15 18:39:29.009376] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ea440 is same with the state(5) to be set 
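The burst of tcp.c:1621 nvmf_tcp_qpair_set_recv_state messages that begins above and continues below coincides with multipath.sh@100 removing the optimized 4421 listener while bdevperf still has IO outstanding on it; they appear to come from the target tearing down that qpair. The failover portion of the test, reconstructed from @100 onward (the sleep and the follow-up check appear further down in the trace), is roughly:

  # Failover: pull the optimized 4421 listener while IO is running, then verify that
  # bdevperf's IO moves to the remaining non_optimized 4420 path (multipath.sh@100-@104).
  $rpc_py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  sleep 1
  confirm_io_on_port non_optimized 4420   # sketched earlier; its output appears below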
00:20:06.528 [2024-07-15 18:39:29.009385] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ea440 is same with the state(5) to be set 00:20:06.528 [2024-07-15 18:39:29.009393] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ea440 is same with the state(5) to be set 00:20:06.528 [2024-07-15 18:39:29.009401] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ea440 is same with the state(5) to be set 00:20:06.528 [2024-07-15 18:39:29.009410] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ea440 is same with the state(5) to be set 00:20:06.528 [2024-07-15 18:39:29.009418] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ea440 is same with the state(5) to be set 00:20:06.528 [2024-07-15 18:39:29.009426] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ea440 is same with the state(5) to be set 00:20:06.528 [2024-07-15 18:39:29.009435] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ea440 is same with the state(5) to be set 00:20:06.528 [2024-07-15 18:39:29.009443] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ea440 is same with the state(5) to be set 00:20:06.528 [2024-07-15 18:39:29.009451] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ea440 is same with the state(5) to be set 00:20:06.528 [2024-07-15 18:39:29.009459] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ea440 is same with the state(5) to be set 00:20:06.528 [2024-07-15 18:39:29.009468] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ea440 is same with the state(5) to be set 00:20:06.528 [2024-07-15 18:39:29.009476] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ea440 is same with the state(5) to be set 00:20:06.528 [2024-07-15 18:39:29.009484] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ea440 is same with the state(5) to be set 00:20:06.528 [2024-07-15 18:39:29.009493] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ea440 is same with the state(5) to be set 00:20:06.528 [2024-07-15 18:39:29.009501] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ea440 is same with the state(5) to be set 00:20:06.529 [2024-07-15 18:39:29.009509] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ea440 is same with the state(5) to be set 00:20:06.529 [2024-07-15 18:39:29.009517] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ea440 is same with the state(5) to be set 00:20:06.529 [2024-07-15 18:39:29.009526] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ea440 is same with the state(5) to be set 00:20:06.529 [2024-07-15 18:39:29.009534] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ea440 is same with the state(5) to be set 00:20:06.529 [2024-07-15 18:39:29.009544] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ea440 is same with the state(5) to be set 00:20:06.529 [2024-07-15 18:39:29.009552] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ea440 is same with the state(5) to be set 00:20:06.529 [2024-07-15 18:39:29.009561] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x18ea440 is same with the state(5) to be set 00:20:06.529 [2024-07-15 18:39:29.009580] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ea440 is same with the state(5) to be set 00:20:06.529 [2024-07-15 18:39:29.009590] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ea440 is same with the state(5) to be set 00:20:06.529 [2024-07-15 18:39:29.009598] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ea440 is same with the state(5) to be set 00:20:06.529 [2024-07-15 18:39:29.009607] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ea440 is same with the state(5) to be set 00:20:06.529 [2024-07-15 18:39:29.009615] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ea440 is same with the state(5) to be set 00:20:06.529 [2024-07-15 18:39:29.009643] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ea440 is same with the state(5) to be set 00:20:06.529 [2024-07-15 18:39:29.009653] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ea440 is same with the state(5) to be set 00:20:06.529 [2024-07-15 18:39:29.009661] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ea440 is same with the state(5) to be set 00:20:06.529 [2024-07-15 18:39:29.009670] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ea440 is same with the state(5) to be set 00:20:06.529 [2024-07-15 18:39:29.009678] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ea440 is same with the state(5) to be set 00:20:06.529 [2024-07-15 18:39:29.009686] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ea440 is same with the state(5) to be set 00:20:06.529 [2024-07-15 18:39:29.009694] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ea440 is same with the state(5) to be set 00:20:06.529 [2024-07-15 18:39:29.009703] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ea440 is same with the state(5) to be set 00:20:06.529 [2024-07-15 18:39:29.009711] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ea440 is same with the state(5) to be set 00:20:06.529 [2024-07-15 18:39:29.009719] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ea440 is same with the state(5) to be set 00:20:06.529 [2024-07-15 18:39:29.009728] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ea440 is same with the state(5) to be set 00:20:06.529 [2024-07-15 18:39:29.009736] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ea440 is same with the state(5) to be set 00:20:06.529 [2024-07-15 18:39:29.009744] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ea440 is same with the state(5) to be set 00:20:06.529 [2024-07-15 18:39:29.009753] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ea440 is same with the state(5) to be set 00:20:06.529 [2024-07-15 18:39:29.009761] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ea440 is same with the state(5) to be set 00:20:06.529 [2024-07-15 18:39:29.009769] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ea440 is same with the state(5) to be set 00:20:06.529 [2024-07-15 18:39:29.009777] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ea440 is same with the state(5) to be set 00:20:06.529 [2024-07-15 18:39:29.009786] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ea440 is same with the state(5) to be set 00:20:06.529 [2024-07-15 18:39:29.009794] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ea440 is same with the state(5) to be set 00:20:06.529 [2024-07-15 18:39:29.009802] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ea440 is same with the state(5) to be set 00:20:06.529 [2024-07-15 18:39:29.009811] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ea440 is same with the state(5) to be set 00:20:06.529 [2024-07-15 18:39:29.009819] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ea440 is same with the state(5) to be set 00:20:06.529 18:39:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:20:07.464 18:39:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:20:07.464 18:39:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95094 00:20:07.464 18:39:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94257 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:07.464 18:39:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:14.025 18:39:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:14.025 18:39:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:20:14.025 18:39:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:20:14.025 18:39:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:14.025 Attaching 4 probes... 
00:20:14.025 @path[10.0.0.2, 4420]: 21661 00:20:14.025 @path[10.0.0.2, 4420]: 22048 00:20:14.025 @path[10.0.0.2, 4420]: 22027 00:20:14.025 @path[10.0.0.2, 4420]: 22692 00:20:14.025 @path[10.0.0.2, 4420]: 22605 00:20:14.025 18:39:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:14.025 18:39:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:20:14.025 18:39:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:14.025 18:39:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:20:14.025 18:39:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:20:14.025 18:39:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:20:14.025 18:39:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95094 00:20:14.025 18:39:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:14.025 18:39:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:14.025 [2024-07-15 18:39:36.443824] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:14.025 18:39:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:20:14.282 18:39:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:20:20.850 18:39:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:20:20.850 18:39:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95291 00:20:20.850 18:39:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94257 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:20.850 18:39:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:26.119 18:39:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:26.119 18:39:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:20:26.379 18:39:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:20:26.379 18:39:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:26.379 Attaching 4 probes... 
00:20:26.379 @path[10.0.0.2, 4421]: 21636 00:20:26.379 @path[10.0.0.2, 4421]: 22326 00:20:26.379 @path[10.0.0.2, 4421]: 22062 00:20:26.379 @path[10.0.0.2, 4421]: 22104 00:20:26.379 @path[10.0.0.2, 4421]: 21904 00:20:26.379 18:39:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:26.379 18:39:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:20:26.379 18:39:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:26.379 18:39:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:20:26.379 18:39:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:20:26.379 18:39:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:20:26.379 18:39:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95291 00:20:26.379 18:39:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:26.379 18:39:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 94352 00:20:26.379 18:39:48 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 94352 ']' 00:20:26.379 18:39:48 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 94352 00:20:26.379 18:39:48 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname 00:20:26.379 18:39:48 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:26.379 18:39:48 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 94352 00:20:26.379 killing process with pid 94352 00:20:26.379 18:39:48 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:26.379 18:39:48 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:26.379 18:39:48 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 94352' 00:20:26.379 18:39:48 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 94352 00:20:26.379 18:39:48 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 94352 00:20:26.662 Connection closed with partial response: 00:20:26.662 00:20:26.662 00:20:26.662 18:39:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 94352 00:20:26.662 18:39:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:26.662 [2024-07-15 18:38:52.609489] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:20:26.662 [2024-07-15 18:38:52.609595] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94352 ] 00:20:26.662 [2024-07-15 18:38:52.742917] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:26.662 [2024-07-15 18:38:52.843222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:26.662 Running I/O for 90 seconds... 
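The per-IO dump that follows comes from try.txt, the bdevperf log cat'd at multipath.sh@118. For reference, the initiator side of the run was brought up earlier in the trace (multipath.sh@43-@55 and @76) roughly as below: bdevperf in -z mode on its own RPC socket, one controller attach per target port, with -x multipath on the second attach so both connections sit under a single Nvme0n1 bdev. The ASYMMETRIC ACCESS INACCESSIBLE completions in the dump are what the initiator sees on a path whose ANA state has been made inaccessible, which is the condition the path checks above were exercising. The commands are copied from the trace; only the line breaks are added.

  # Initiator side, condensed from multipath.sh@43-@55 and @76 earlier in this log.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 &
  $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 \
      -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
  $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 \
      -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -t 120 -s /var/tmp/bdevperf.sock perform_tests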
00:20:26.662 [2024-07-15 18:39:02.527651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:96984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.662 [2024-07-15 18:39:02.527709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:26.662 [2024-07-15 18:39:02.527761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:97472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.662 [2024-07-15 18:39:02.527775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:20:26.662 [2024-07-15 18:39:02.527793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:97480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.662 [2024-07-15 18:39:02.527806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:26.662 [2024-07-15 18:39:02.527824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:97488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.662 [2024-07-15 18:39:02.527836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:26.662 [2024-07-15 18:39:02.527854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:97496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.662 [2024-07-15 18:39:02.527866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:26.662 [2024-07-15 18:39:02.527884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:97504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.662 [2024-07-15 18:39:02.527896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:26.662 [2024-07-15 18:39:02.527914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:97512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.662 [2024-07-15 18:39:02.527926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:26.662 [2024-07-15 18:39:02.527944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:97520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.662 [2024-07-15 18:39:02.527956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:26.662 [2024-07-15 18:39:02.527973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:97528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.662 [2024-07-15 18:39:02.527985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:26.662 [2024-07-15 18:39:02.528002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:97536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.662 [2024-07-15 18:39:02.528014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:32 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:26.662 [2024-07-15 18:39:02.528032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:97544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.662 [2024-07-15 18:39:02.528058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:26.662 [2024-07-15 18:39:02.528076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:97552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.662 [2024-07-15 18:39:02.528088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:26.662 [2024-07-15 18:39:02.528106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:97560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.662 [2024-07-15 18:39:02.528118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:26.662 [2024-07-15 18:39:02.528136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:97568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.662 [2024-07-15 18:39:02.528149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:26.663 [2024-07-15 18:39:02.528167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:97576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.663 [2024-07-15 18:39:02.528179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:26.663 [2024-07-15 18:39:02.528197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:97584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.663 [2024-07-15 18:39:02.528209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:26.663 [2024-07-15 18:39:02.528227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:97592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.663 [2024-07-15 18:39:02.528239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:26.663 [2024-07-15 18:39:02.528256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:97600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.663 [2024-07-15 18:39:02.528269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:26.663 [2024-07-15 18:39:02.528286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:97608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.663 [2024-07-15 18:39:02.528299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:26.663 [2024-07-15 18:39:02.528316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:97616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.663 [2024-07-15 18:39:02.528328] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:26.663 [2024-07-15 18:39:02.528346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.663 [2024-07-15 18:39:02.528358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:26.663 [2024-07-15 18:39:02.528376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:97632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.663 [2024-07-15 18:39:02.528388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:26.663 [2024-07-15 18:39:02.528406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:96992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.663 [2024-07-15 18:39:02.528419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:26.663 [2024-07-15 18:39:02.528470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:97000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.663 [2024-07-15 18:39:02.528482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:26.663 [2024-07-15 18:39:02.528500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:97008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.663 [2024-07-15 18:39:02.528513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:26.663 [2024-07-15 18:39:02.528531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:97016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.663 [2024-07-15 18:39:02.528543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:26.663 [2024-07-15 18:39:02.528562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:97024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.663 [2024-07-15 18:39:02.528584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:26.663 [2024-07-15 18:39:02.528601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:97032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.663 [2024-07-15 18:39:02.528614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:26.663 [2024-07-15 18:39:02.528632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:97040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.663 [2024-07-15 18:39:02.528644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:26.663 [2024-07-15 18:39:02.528663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:97048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:20:26.663 [2024-07-15 18:39:02.528675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:26.663 [2024-07-15 18:39:02.528694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:97056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.663 [2024-07-15 18:39:02.528706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:26.663 [2024-07-15 18:39:02.528724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:97064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.663 [2024-07-15 18:39:02.528737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:26.663 [2024-07-15 18:39:02.528754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:97072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.663 [2024-07-15 18:39:02.528767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:26.663 [2024-07-15 18:39:02.528785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:97080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.663 [2024-07-15 18:39:02.528799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:26.663 [2024-07-15 18:39:02.530625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:97088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.663 [2024-07-15 18:39:02.530653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:26.663 [2024-07-15 18:39:02.530684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:97096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.663 [2024-07-15 18:39:02.530697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:26.663 [2024-07-15 18:39:02.530715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:97104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.663 [2024-07-15 18:39:02.530728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:26.663 [2024-07-15 18:39:02.530746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:97112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.663 [2024-07-15 18:39:02.530758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:26.663 [2024-07-15 18:39:02.530776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:97120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.663 [2024-07-15 18:39:02.530789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:26.663 [2024-07-15 18:39:02.530806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 
nsid:1 lba:97128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.663 [2024-07-15 18:39:02.530819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:26.663 [2024-07-15 18:39:02.530837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:97136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.663 [2024-07-15 18:39:02.530849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:26.663 [2024-07-15 18:39:02.530867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:97144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.663 [2024-07-15 18:39:02.530880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:26.663 [2024-07-15 18:39:02.530897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:97152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.663 [2024-07-15 18:39:02.530910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:26.663 [2024-07-15 18:39:02.530928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:97160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.663 [2024-07-15 18:39:02.530940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:26.663 [2024-07-15 18:39:02.530958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:97168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.663 [2024-07-15 18:39:02.530971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:26.663 [2024-07-15 18:39:02.530989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:97176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.663 [2024-07-15 18:39:02.531001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:26.663 [2024-07-15 18:39:02.531019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:97184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.663 [2024-07-15 18:39:02.531031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:26.663 [2024-07-15 18:39:02.531049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:97192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.663 [2024-07-15 18:39:02.531066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:26.663 [2024-07-15 18:39:02.531084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:97200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.663 [2024-07-15 18:39:02.531097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:26.663 [2024-07-15 18:39:02.531114] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:97208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.663 [2024-07-15 18:39:02.531127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:26.663 [2024-07-15 18:39:02.531145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:97216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.663 [2024-07-15 18:39:02.531157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:20:26.663 [2024-07-15 18:39:02.531175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:97224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.663 [2024-07-15 18:39:02.531188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:26.663 [2024-07-15 18:39:02.531205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:97232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.663 [2024-07-15 18:39:02.531218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:26.663 [2024-07-15 18:39:02.531244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:97240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.663 [2024-07-15 18:39:02.531257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:26.663 [2024-07-15 18:39:02.531275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:97248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.663 [2024-07-15 18:39:02.531287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:26.663 [2024-07-15 18:39:02.531305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:97256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.664 [2024-07-15 18:39:02.531318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:26.664 [2024-07-15 18:39:02.531335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:97264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.664 [2024-07-15 18:39:02.531348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:26.664 [2024-07-15 18:39:02.531365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:97272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.664 [2024-07-15 18:39:02.531378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:26.664 [2024-07-15 18:39:02.531397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:97280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.664 [2024-07-15 18:39:02.531409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:20:26.664 [2024-07-15 18:39:02.531427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:97288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.664 [2024-07-15 18:39:02.531443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:20:26.664 [2024-07-15 18:39:02.531461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:97296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.664 [2024-07-15 18:39:02.531473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:20:26.664 [2024-07-15 18:39:02.531491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:97304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.664 [2024-07-15 18:39:02.531504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:26.664 [2024-07-15 18:39:02.531521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:97312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.664 [2024-07-15 18:39:02.531534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:26.664 [2024-07-15 18:39:02.531551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:97320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.664 [2024-07-15 18:39:02.531564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:20:26.664 [2024-07-15 18:39:02.531590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:97328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.664 [2024-07-15 18:39:02.531603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:26.664 [2024-07-15 18:39:02.531621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:97336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.664 [2024-07-15 18:39:02.531634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:26.664 [2024-07-15 18:39:02.531652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:97344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.664 [2024-07-15 18:39:02.531665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:20:26.664 [2024-07-15 18:39:02.531682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:97352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.664 [2024-07-15 18:39:02.531695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:26.664 [2024-07-15 18:39:02.531713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:97360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.664 [2024-07-15 18:39:02.531725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:26.664 [2024-07-15 18:39:02.531743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:97368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.664 [2024-07-15 18:39:02.531755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:26.664 [2024-07-15 18:39:02.531774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:97376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.664 [2024-07-15 18:39:02.531787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:26.664 [2024-07-15 18:39:02.531805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:97384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.664 [2024-07-15 18:39:02.531818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:26.664 [2024-07-15 18:39:02.531840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:97392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.664 [2024-07-15 18:39:02.531852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:20:26.664 [2024-07-15 18:39:02.531870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:97400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.664 [2024-07-15 18:39:02.531883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:26.664 [2024-07-15 18:39:02.531901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:97408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.664 [2024-07-15 18:39:02.531913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:20:26.664 [2024-07-15 18:39:02.531931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:97416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.664 [2024-07-15 18:39:02.531943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:26.664 [2024-07-15 18:39:02.531961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:97424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.664 [2024-07-15 18:39:02.531973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:26.664 [2024-07-15 18:39:02.531991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:97432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.664 [2024-07-15 18:39:02.532003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:26.664 [2024-07-15 18:39:02.532021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:97440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.664 [2024-07-15 
18:39:02.532034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:26.664 [2024-07-15 18:39:02.532051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:97448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.664 [2024-07-15 18:39:02.532064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:20:26.664 [2024-07-15 18:39:02.532082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:97456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.664 [2024-07-15 18:39:02.532094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:26.664 [2024-07-15 18:39:02.532530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:97464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.664 [2024-07-15 18:39:02.532549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:20:26.664 [2024-07-15 18:39:08.990112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:35880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.664 [2024-07-15 18:39:08.990172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:26.664 [2024-07-15 18:39:08.990220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:35888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.664 [2024-07-15 18:39:08.990235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:20:26.664 [2024-07-15 18:39:08.990274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:35896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.664 [2024-07-15 18:39:08.990287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:26.664 [2024-07-15 18:39:08.990304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:35904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.664 [2024-07-15 18:39:08.990317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:26.664 [2024-07-15 18:39:08.990335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:35912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.664 [2024-07-15 18:39:08.990347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:20:26.664 [2024-07-15 18:39:08.990365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:35920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.664 [2024-07-15 18:39:08.990378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:26.664 [2024-07-15 18:39:08.990395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:35928 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:20:26.664 [2024-07-15 18:39:08.990408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:26.664 [2024-07-15 18:39:08.990425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:35936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.664 [2024-07-15 18:39:08.990437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:26.664 [2024-07-15 18:39:08.990455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.664 [2024-07-15 18:39:08.990467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:26.664 [2024-07-15 18:39:08.990485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:35952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.664 [2024-07-15 18:39:08.990497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:26.664 [2024-07-15 18:39:08.990515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:35960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.664 [2024-07-15 18:39:08.990527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:20:26.664 [2024-07-15 18:39:08.990544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:35968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.664 [2024-07-15 18:39:08.990557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:26.664 [2024-07-15 18:39:08.990584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:35976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.664 [2024-07-15 18:39:08.990597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:20:26.664 [2024-07-15 18:39:08.990615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.664 [2024-07-15 18:39:08.990627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:26.664 [2024-07-15 18:39:08.990650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:35992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.664 [2024-07-15 18:39:08.990663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:26.664 [2024-07-15 18:39:08.990681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:36000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.665 [2024-07-15 18:39:08.990693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:26.665 [2024-07-15 18:39:08.990711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:104 nsid:1 lba:36008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.665 [2024-07-15 18:39:08.990723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:26.665 [2024-07-15 18:39:08.990741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:36016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.665 [2024-07-15 18:39:08.990753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:20:26.665 [2024-07-15 18:39:08.990771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:36024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.665 [2024-07-15 18:39:08.990783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:26.665 [2024-07-15 18:39:08.990800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:36032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.665 [2024-07-15 18:39:08.990813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:20:26.665 [2024-07-15 18:39:08.990831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:36040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.665 [2024-07-15 18:39:08.990843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:26.665 [2024-07-15 18:39:08.990862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:36048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.665 [2024-07-15 18:39:08.990875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:26.665 [2024-07-15 18:39:08.990892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:36056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.665 [2024-07-15 18:39:08.990905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:20:26.665 [2024-07-15 18:39:08.990923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:36064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.665 [2024-07-15 18:39:08.990935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:20:26.665 [2024-07-15 18:39:08.990953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:36072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.665 [2024-07-15 18:39:08.990965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:20:26.665 [2024-07-15 18:39:08.990983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.665 [2024-07-15 18:39:08.990995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:20:26.665 [2024-07-15 18:39:08.991013] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:36088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.665 [2024-07-15 18:39:08.991030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:26.665 [2024-07-15 18:39:08.991047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:36096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.665 [2024-07-15 18:39:08.991060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:26.665 [2024-07-15 18:39:08.991077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:36104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.665 [2024-07-15 18:39:08.991090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:26.665 [2024-07-15 18:39:08.991107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:36112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.665 [2024-07-15 18:39:08.991120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:26.665 [2024-07-15 18:39:08.991137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:36120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.665 [2024-07-15 18:39:08.991150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:26.665 [2024-07-15 18:39:08.991167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:36128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.665 [2024-07-15 18:39:08.991180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:26.665 [2024-07-15 18:39:08.991198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:36136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.665 [2024-07-15 18:39:08.991211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:26.665 [2024-07-15 18:39:08.991229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:36144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.665 [2024-07-15 18:39:08.991250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:26.665 [2024-07-15 18:39:08.991357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:36152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.665 [2024-07-15 18:39:08.991373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:26.665 [2024-07-15 18:39:08.991395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:36160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.665 [2024-07-15 18:39:08.991408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0029 p:0 m:0 
dnr:0 00:20:26.665 [2024-07-15 18:39:08.991428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:35368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.665 [2024-07-15 18:39:08.991440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:20:26.665 [2024-07-15 18:39:08.991460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:35376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.665 [2024-07-15 18:39:08.991473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:26.665 [2024-07-15 18:39:08.991492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:35384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.665 [2024-07-15 18:39:08.991512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:26.665 [2024-07-15 18:39:08.991532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:35392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.665 [2024-07-15 18:39:08.991545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:26.665 [2024-07-15 18:39:08.991575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:35400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.665 [2024-07-15 18:39:08.991588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:26.665 [2024-07-15 18:39:08.991608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:35408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.665 [2024-07-15 18:39:08.991621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:26.665 [2024-07-15 18:39:08.991641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:35416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.665 [2024-07-15 18:39:08.991653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:26.665 [2024-07-15 18:39:08.991673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:35424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.665 [2024-07-15 18:39:08.991686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:26.665 [2024-07-15 18:39:08.991705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:35432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.665 [2024-07-15 18:39:08.991718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:26.665 [2024-07-15 18:39:08.991738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:35440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.665 [2024-07-15 18:39:08.991750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:26.665 [2024-07-15 18:39:08.991771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:35448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.665 [2024-07-15 18:39:08.991783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:26.665 [2024-07-15 18:39:08.991803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:35456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.665 [2024-07-15 18:39:08.991816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:26.665 [2024-07-15 18:39:08.991835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:35464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.665 [2024-07-15 18:39:08.991848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:26.665 [2024-07-15 18:39:08.991868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:35472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.665 [2024-07-15 18:39:08.991880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:26.665 [2024-07-15 18:39:08.991900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:35480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.665 [2024-07-15 18:39:08.991912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:26.665 [2024-07-15 18:39:08.991936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:35488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.665 [2024-07-15 18:39:08.991949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:26.665 [2024-07-15 18:39:08.991969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:36168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.665 [2024-07-15 18:39:08.991981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:26.665 [2024-07-15 18:39:08.992001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:36176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.665 [2024-07-15 18:39:08.992014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:26.665 [2024-07-15 18:39:08.992034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:36184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.665 [2024-07-15 18:39:08.992047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:26.665 [2024-07-15 18:39:08.992067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:36192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.665 [2024-07-15 18:39:08.992079] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:26.665 [2024-07-15 18:39:08.992469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:36200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.665 [2024-07-15 18:39:08.992484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:26.665 [2024-07-15 18:39:08.992505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:36208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.666 [2024-07-15 18:39:08.992517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:26.666 [2024-07-15 18:39:08.992538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:36216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.666 [2024-07-15 18:39:08.992551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:26.666 [2024-07-15 18:39:08.992581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:35496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.666 [2024-07-15 18:39:08.992595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:26.666 [2024-07-15 18:39:08.992616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:35504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.666 [2024-07-15 18:39:08.992628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:26.666 [2024-07-15 18:39:08.992650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:35512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.666 [2024-07-15 18:39:08.992663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:26.666 [2024-07-15 18:39:08.992684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:35520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.666 [2024-07-15 18:39:08.992700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:26.666 [2024-07-15 18:39:08.992727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:35528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.666 [2024-07-15 18:39:08.992740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:26.666 [2024-07-15 18:39:08.992761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:35536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.666 [2024-07-15 18:39:08.992773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:26.666 [2024-07-15 18:39:08.992794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:35544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:26.666 [2024-07-15 18:39:08.992807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:26.666 [2024-07-15 18:39:08.992828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:35552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.666 [2024-07-15 18:39:08.992841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:26.666 [2024-07-15 18:39:08.992861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:35560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.666 [2024-07-15 18:39:08.992874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:20:26.666 [2024-07-15 18:39:08.992895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:35568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.666 [2024-07-15 18:39:08.992907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:26.666 [2024-07-15 18:39:08.992928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:35576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.666 [2024-07-15 18:39:08.992941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:26.666 [2024-07-15 18:39:08.992961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:35584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.666 [2024-07-15 18:39:08.992974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:26.666 [2024-07-15 18:39:08.992994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:35592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.666 [2024-07-15 18:39:08.993007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:26.666 [2024-07-15 18:39:08.993028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:35600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.666 [2024-07-15 18:39:08.993041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:26.666 [2024-07-15 18:39:08.993061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:35608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.666 [2024-07-15 18:39:08.993074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:26.666 [2024-07-15 18:39:08.993095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:35616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.666 [2024-07-15 18:39:08.993107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:26.666 [2024-07-15 18:39:08.993128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 
nsid:1 lba:35624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.666 [2024-07-15 18:39:08.993144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:26.666 [2024-07-15 18:39:08.993166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:35632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.666 [2024-07-15 18:39:08.993178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:26.666 [2024-07-15 18:39:08.993200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:35640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.666 [2024-07-15 18:39:08.993213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:26.666 [2024-07-15 18:39:08.993233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:35648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.666 [2024-07-15 18:39:08.993248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:26.666 [2024-07-15 18:39:08.993269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:35656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.666 [2024-07-15 18:39:08.993281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:26.666 [2024-07-15 18:39:08.993302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:35664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.666 [2024-07-15 18:39:08.993315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:26.666 [2024-07-15 18:39:08.993335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:35672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.666 [2024-07-15 18:39:08.993347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:26.666 [2024-07-15 18:39:08.993368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:35680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.666 [2024-07-15 18:39:08.993381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:26.666 [2024-07-15 18:39:08.993402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:35688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.666 [2024-07-15 18:39:08.993415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:26.666 [2024-07-15 18:39:08.993435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:35696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.666 [2024-07-15 18:39:08.993448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:26.666 [2024-07-15 18:39:08.993468] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:35704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.666 [2024-07-15 18:39:08.993481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:26.666 [2024-07-15 18:39:08.993502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:35712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.666 [2024-07-15 18:39:08.993515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:26.666 [2024-07-15 18:39:08.993536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:35720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.666 [2024-07-15 18:39:08.993553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:26.666 [2024-07-15 18:39:08.993582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:35728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.666 [2024-07-15 18:39:08.993595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:26.666 [2024-07-15 18:39:08.993616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:35736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.666 [2024-07-15 18:39:08.993629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:26.666 [2024-07-15 18:39:08.993649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:35744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.666 [2024-07-15 18:39:08.993662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:26.666 [2024-07-15 18:39:08.993682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:36224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.666 [2024-07-15 18:39:08.993694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:26.667 [2024-07-15 18:39:08.993715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:36232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.667 [2024-07-15 18:39:08.993728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:26.667 [2024-07-15 18:39:08.993749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:36240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.667 [2024-07-15 18:39:08.993762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:26.667 [2024-07-15 18:39:08.993782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:36248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.667 [2024-07-15 18:39:08.993795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 
00:20:26.667 [2024-07-15 18:39:08.993817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:36256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.667 [2024-07-15 18:39:08.993829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:26.667 [2024-07-15 18:39:08.994034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:36264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.667 [2024-07-15 18:39:08.994054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:26.667 [2024-07-15 18:39:08.994080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:35752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.667 [2024-07-15 18:39:08.994093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:26.667 [2024-07-15 18:39:08.994117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:35760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.667 [2024-07-15 18:39:08.994133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:26.667 [2024-07-15 18:39:08.994158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:35768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.667 [2024-07-15 18:39:08.994171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:26.667 [2024-07-15 18:39:08.994203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:35776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.667 [2024-07-15 18:39:08.994216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:26.667 [2024-07-15 18:39:08.994241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:35784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.667 [2024-07-15 18:39:08.994253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:26.667 [2024-07-15 18:39:08.994277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:35792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.667 [2024-07-15 18:39:08.994290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:26.667 [2024-07-15 18:39:08.994315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:35800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.667 [2024-07-15 18:39:08.994327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:26.667 [2024-07-15 18:39:08.994351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:35808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.667 [2024-07-15 18:39:08.994364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:26.667 [2024-07-15 18:39:08.994389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:35816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.667 [2024-07-15 18:39:08.994401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:26.667 [2024-07-15 18:39:08.994425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:35824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.667 [2024-07-15 18:39:08.994438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:26.667 [2024-07-15 18:39:08.994463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:35832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.667 [2024-07-15 18:39:08.994476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:26.667 [2024-07-15 18:39:08.994500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:35840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.667 [2024-07-15 18:39:08.994513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:26.667 [2024-07-15 18:39:08.994537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:35848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.667 [2024-07-15 18:39:08.994550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:26.667 [2024-07-15 18:39:08.994586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:35856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.667 [2024-07-15 18:39:08.994599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:26.667 [2024-07-15 18:39:08.994624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:35864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.667 [2024-07-15 18:39:08.994636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:26.667 [2024-07-15 18:39:08.994666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:35872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.667 [2024-07-15 18:39:08.994679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:26.667 [2024-07-15 18:39:15.880666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:53224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.667 [2024-07-15 18:39:15.880727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:26.667 [2024-07-15 18:39:15.880757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:53304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.667 [2024-07-15 18:39:15.880771] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:26.667 [2024-07-15 18:39:15.880790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:53312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.667 [2024-07-15 18:39:15.880804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:26.667 [2024-07-15 18:39:15.880823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:53320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.667 [2024-07-15 18:39:15.880835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:26.667 [2024-07-15 18:39:15.880854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:53328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.667 [2024-07-15 18:39:15.880867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:26.667 [2024-07-15 18:39:15.880886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:53336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.667 [2024-07-15 18:39:15.880899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:26.667 [2024-07-15 18:39:15.880917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:53344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.667 [2024-07-15 18:39:15.880930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:26.667 [2024-07-15 18:39:15.880949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:53352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.667 [2024-07-15 18:39:15.880962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:26.667 [2024-07-15 18:39:15.880980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:53360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.667 [2024-07-15 18:39:15.880993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:26.667 [2024-07-15 18:39:15.881011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:53368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.667 [2024-07-15 18:39:15.881024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:26.667 [2024-07-15 18:39:15.881043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:53376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.667 [2024-07-15 18:39:15.881055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:26.667 [2024-07-15 18:39:15.881074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:53384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.667 
[2024-07-15 18:39:15.881109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:26.667 [2024-07-15 18:39:15.881128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:53392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.667 [2024-07-15 18:39:15.881141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:26.667 [2024-07-15 18:39:15.881160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:53400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.667 [2024-07-15 18:39:15.881173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:26.667 [2024-07-15 18:39:15.881191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:53408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.667 [2024-07-15 18:39:15.881204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:26.667 [2024-07-15 18:39:15.881223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:53416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.667 [2024-07-15 18:39:15.881236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:26.667 [2024-07-15 18:39:15.881254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:53424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.667 [2024-07-15 18:39:15.881267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:26.667 [2024-07-15 18:39:15.881286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:53432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.667 [2024-07-15 18:39:15.881299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:26.667 [2024-07-15 18:39:15.881317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:53440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.667 [2024-07-15 18:39:15.881330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:26.667 [2024-07-15 18:39:15.881348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:53448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.667 [2024-07-15 18:39:15.881361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:26.667 [2024-07-15 18:39:15.881379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:53456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.668 [2024-07-15 18:39:15.881392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:26.668 [2024-07-15 18:39:15.881411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:53464 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.668 [2024-07-15 18:39:15.881423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:26.668 [2024-07-15 18:39:15.881442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:53472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.668 [2024-07-15 18:39:15.881455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:26.668 [2024-07-15 18:39:15.881474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:53480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.668 [2024-07-15 18:39:15.881492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:26.668 [2024-07-15 18:39:15.881511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:53488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.668 [2024-07-15 18:39:15.881524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:26.668 [2024-07-15 18:39:15.881543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:53496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.668 [2024-07-15 18:39:15.881556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:26.668 [2024-07-15 18:39:15.881587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:53504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.668 [2024-07-15 18:39:15.881601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:26.668 [2024-07-15 18:39:15.881619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:53512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.668 [2024-07-15 18:39:15.881632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:26.668 [2024-07-15 18:39:15.881651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:53520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.668 [2024-07-15 18:39:15.881664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:26.668 [2024-07-15 18:39:15.881683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:53528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.668 [2024-07-15 18:39:15.881696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:26.668 [2024-07-15 18:39:15.881715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:53536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.668 [2024-07-15 18:39:15.881728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:26.668 [2024-07-15 18:39:15.881747] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:53544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.668 [2024-07-15 18:39:15.881760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:26.668 [2024-07-15 18:39:15.882385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:53552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.668 [2024-07-15 18:39:15.882411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:26.668 [2024-07-15 18:39:15.882434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:53560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.668 [2024-07-15 18:39:15.882447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:26.668 [2024-07-15 18:39:15.882467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:53568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.668 [2024-07-15 18:39:15.882481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:26.668 [2024-07-15 18:39:15.882500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:53576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.668 [2024-07-15 18:39:15.882514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:26.668 [2024-07-15 18:39:15.882544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:53584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.668 [2024-07-15 18:39:15.882558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:26.668 [2024-07-15 18:39:15.882590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:53592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.668 [2024-07-15 18:39:15.882603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:26.668 [2024-07-15 18:39:15.882622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:53600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.668 [2024-07-15 18:39:15.882635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:20:26.668 [2024-07-15 18:39:15.882655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:53608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.668 [2024-07-15 18:39:15.882668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:26.668 [2024-07-15 18:39:15.882687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:53616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.668 [2024-07-15 18:39:15.882700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007c p:0 m:0 dnr:0 
00:20:26.668 [2024-07-15 18:39:15.882719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:53624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.668 [2024-07-15 18:39:15.882732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:26.668 [2024-07-15 18:39:15.882752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:53632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.668 [2024-07-15 18:39:15.882765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:26.668 [2024-07-15 18:39:15.882784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:53640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.668 [2024-07-15 18:39:15.882797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:26.668 [2024-07-15 18:39:15.882816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:53648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.668 [2024-07-15 18:39:15.882830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:26.668 [2024-07-15 18:39:15.882849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:53656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.668 [2024-07-15 18:39:15.882862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:26.668 [2024-07-15 18:39:15.882882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:53664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.668 [2024-07-15 18:39:15.882895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:26.668 [2024-07-15 18:39:15.882914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:53672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.668 [2024-07-15 18:39:15.882927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:20:26.668 [2024-07-15 18:39:15.882952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:53232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.668 [2024-07-15 18:39:15.882966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:20:26.668 [2024-07-15 18:39:15.882985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:53240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.668 [2024-07-15 18:39:15.882998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:26.668 [2024-07-15 18:39:15.883018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:53248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.668 [2024-07-15 18:39:15.883031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:32 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:26.668 [2024-07-15 18:39:15.883050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:53256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.668 [2024-07-15 18:39:15.883063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:20:26.668 [2024-07-15 18:39:15.883082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:53264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.668 [2024-07-15 18:39:15.883096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:26.668 [2024-07-15 18:39:15.883115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:53272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.668 [2024-07-15 18:39:15.883128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:26.668 [2024-07-15 18:39:15.883147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:53280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.668 [2024-07-15 18:39:15.883160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:20:26.668 [2024-07-15 18:39:15.883179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:53288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.668 [2024-07-15 18:39:15.883193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:26.668 [2024-07-15 18:39:15.883212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:53296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.668 [2024-07-15 18:39:15.883225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:26.668 [2024-07-15 18:39:15.883256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:53680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.668 [2024-07-15 18:39:15.883270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:26.668 [2024-07-15 18:39:15.883289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:53688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.668 [2024-07-15 18:39:15.883302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:26.669 [2024-07-15 18:39:15.883321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:53696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.669 [2024-07-15 18:39:15.883334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:26.669 [2024-07-15 18:39:15.883353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:53704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.669 [2024-07-15 18:39:15.883371] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:20:26.669 [2024-07-15 18:39:15.883390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:53712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.669 [2024-07-15 18:39:15.883403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:26.669 [2024-07-15 18:39:15.883423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:53720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.669 [2024-07-15 18:39:15.883436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:20:26.669 [2024-07-15 18:39:15.883455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:53728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.669 [2024-07-15 18:39:15.883469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:26.669 [2024-07-15 18:39:15.883487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:53736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.669 [2024-07-15 18:39:15.883500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:26.669 [2024-07-15 18:39:15.883519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:53744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.669 [2024-07-15 18:39:15.883533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:26.669 [2024-07-15 18:39:15.883552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:53752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.669 [2024-07-15 18:39:15.883574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:26.669 [2024-07-15 18:39:15.883593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:53760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.669 [2024-07-15 18:39:15.883606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:20:26.669 [2024-07-15 18:39:15.883625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:53768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.669 [2024-07-15 18:39:15.883638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:26.669 [2024-07-15 18:39:15.883657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:53776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.669 [2024-07-15 18:39:15.883671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:20:26.669 [2024-07-15 18:39:15.883690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:53784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:20:26.669 [2024-07-15 18:39:15.883704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:26.669 [2024-07-15 18:39:15.883722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:53792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.669 [2024-07-15 18:39:15.883736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:26.669 [2024-07-15 18:39:15.883754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:53800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.669 [2024-07-15 18:39:15.883772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:20:26.669 [2024-07-15 18:39:15.883792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:53808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.669 [2024-07-15 18:39:15.883805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:20:26.669 [2024-07-15 18:39:15.883824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:53816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.669 [2024-07-15 18:39:15.883838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:20:26.669 [2024-07-15 18:39:15.883857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:53824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.669 [2024-07-15 18:39:15.883870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:20:26.669 [2024-07-15 18:39:15.883890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:53832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.669 [2024-07-15 18:39:15.883903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:26.669 [2024-07-15 18:39:15.883922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:53840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.669 [2024-07-15 18:39:15.883935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:26.669 [2024-07-15 18:39:15.883955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:53848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.669 [2024-07-15 18:39:15.883968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:26.669 [2024-07-15 18:39:15.883987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:53856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.669 [2024-07-15 18:39:15.884001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:26.669 [2024-07-15 18:39:15.884026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 
lba:53864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.669 [2024-07-15 18:39:15.884039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:26.669 [2024-07-15 18:39:15.884058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:53872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.669 [2024-07-15 18:39:15.884072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:26.669 [2024-07-15 18:39:15.884090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:53880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.669 [2024-07-15 18:39:15.884104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:26.669 [2024-07-15 18:39:15.884123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:53888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.669 [2024-07-15 18:39:15.884137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:26.669 [2024-07-15 18:39:15.884156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:53896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.669 [2024-07-15 18:39:15.884170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:26.669 [2024-07-15 18:39:15.884193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:53904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.669 [2024-07-15 18:39:15.884206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:20:26.669 [2024-07-15 18:39:15.884225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:53912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.669 [2024-07-15 18:39:15.884238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:20:26.669 [2024-07-15 18:39:15.884257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:53920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.669 [2024-07-15 18:39:15.884271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:26.669 [2024-07-15 18:39:15.884290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:53928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.669 [2024-07-15 18:39:15.884303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:26.669 [2024-07-15 18:39:15.884322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:53936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.669 [2024-07-15 18:39:15.884335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:26.669 [2024-07-15 18:39:15.884904] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:53944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.669 [2024-07-15 18:39:15.884929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:26.669 [2024-07-15 18:39:15.884951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:53952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.669 [2024-07-15 18:39:15.884965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:26.669 [2024-07-15 18:39:15.884984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:53960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.669 [2024-07-15 18:39:15.884997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:26.669 [2024-07-15 18:39:15.885016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:53968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.669 [2024-07-15 18:39:15.885029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:26.669 [2024-07-15 18:39:15.885048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:53976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.669 [2024-07-15 18:39:15.885067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:26.669 [2024-07-15 18:39:15.885085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:53984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.669 [2024-07-15 18:39:15.885099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:26.670 [2024-07-15 18:39:15.885119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:53992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.670 [2024-07-15 18:39:15.885133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:26.670 [2024-07-15 18:39:15.885210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:54000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.670 [2024-07-15 18:39:15.885225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:26.670 [2024-07-15 18:39:15.885243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:54008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.670 [2024-07-15 18:39:15.885257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:26.670 [2024-07-15 18:39:15.885276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:54016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.670 [2024-07-15 18:39:15.885289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 
00:20:26.670 [2024-07-15 18:39:15.885308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:54024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.670 [2024-07-15 18:39:15.885322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:26.670 [2024-07-15 18:39:15.885340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:54032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.670 [2024-07-15 18:39:15.885354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:26.670 [2024-07-15 18:39:15.885373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:54040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.670 [2024-07-15 18:39:15.885386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:26.670 [2024-07-15 18:39:15.885405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:54048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.670 [2024-07-15 18:39:15.885419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:26.670 [2024-07-15 18:39:15.885437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:54056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.670 [2024-07-15 18:39:15.885450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:26.670 [2024-07-15 18:39:15.885469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:54064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.670 [2024-07-15 18:39:15.885483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:26.670 [2024-07-15 18:39:15.885502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:54072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.670 [2024-07-15 18:39:15.885515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:26.670 [2024-07-15 18:39:15.885534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:54080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.670 [2024-07-15 18:39:15.885547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:26.670 [2024-07-15 18:39:15.885580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:54088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.670 [2024-07-15 18:39:15.885594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:26.670 [2024-07-15 18:39:15.885613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:54096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.670 [2024-07-15 18:39:15.885632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:38 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:26.670 [2024-07-15 18:39:15.885651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:54104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.670 [2024-07-15 18:39:15.885666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:26.670 [2024-07-15 18:39:15.885685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:54112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.670 [2024-07-15 18:39:15.885698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:26.670 [2024-07-15 18:39:15.885718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:54120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.670 [2024-07-15 18:39:15.885732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:26.670 [2024-07-15 18:39:15.885750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:54128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.670 [2024-07-15 18:39:15.885763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:26.670 [2024-07-15 18:39:15.885782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:54136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.670 [2024-07-15 18:39:15.885796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:26.670 [2024-07-15 18:39:15.885815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:54144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.670 [2024-07-15 18:39:15.885828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:26.670 [2024-07-15 18:39:15.885847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:54152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.670 [2024-07-15 18:39:15.885860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:26.670 [2024-07-15 18:39:15.885878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:54160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.670 [2024-07-15 18:39:15.885892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:20:26.670 [2024-07-15 18:39:15.885910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:54168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.670 [2024-07-15 18:39:15.885924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:26.670 [2024-07-15 18:39:15.885943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:54176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.670 [2024-07-15 18:39:15.885956] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:26.670 [2024-07-15 18:39:15.885974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:54184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.670 [2024-07-15 18:39:15.885988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:26.670 [2024-07-15 18:39:15.886006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:54192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.670 [2024-07-15 18:39:15.886025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:26.670 [2024-07-15 18:39:15.886044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:54200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.670 [2024-07-15 18:39:15.886057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:26.670 [2024-07-15 18:39:15.886076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:54208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.670 [2024-07-15 18:39:15.886089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:26.670 [2024-07-15 18:39:15.886108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:54216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.670 [2024-07-15 18:39:15.886121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:26.670 [2024-07-15 18:39:15.886141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:54224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.670 [2024-07-15 18:39:15.886154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:26.670 [2024-07-15 18:39:15.886173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:54232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.670 [2024-07-15 18:39:15.886188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:26.670 [2024-07-15 18:39:15.886207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:54240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.670 [2024-07-15 18:39:15.886220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:26.670 [2024-07-15 18:39:15.886240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:53224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.670 [2024-07-15 18:39:15.886253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:26.670 [2024-07-15 18:39:15.886272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:53304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:20:26.670 [2024-07-15 18:39:15.886285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:26.670 [2024-07-15 18:39:15.886304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:53312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.670 [2024-07-15 18:39:15.886317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:26.670 [2024-07-15 18:39:15.886337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:53320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.670 [2024-07-15 18:39:15.886350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:26.670 [2024-07-15 18:39:15.886369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:53328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.670 [2024-07-15 18:39:15.886382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:26.670 [2024-07-15 18:39:15.886401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:53336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.670 [2024-07-15 18:39:15.886414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:26.670 [2024-07-15 18:39:15.886437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:53344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.670 [2024-07-15 18:39:15.886451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:26.670 [2024-07-15 18:39:15.886470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:53352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.670 [2024-07-15 18:39:15.886483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:26.670 [2024-07-15 18:39:15.886502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:53360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.671 [2024-07-15 18:39:15.886516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:26.671 [2024-07-15 18:39:15.886534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:53368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.671 [2024-07-15 18:39:15.886547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:26.671 [2024-07-15 18:39:15.886575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:53376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.671 [2024-07-15 18:39:15.886589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:26.671 [2024-07-15 18:39:15.886608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:53384 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.671 [2024-07-15 18:39:15.886621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:26.671 [2024-07-15 18:39:15.886640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:53392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.671 [2024-07-15 18:39:15.886654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:26.671 [2024-07-15 18:39:15.886673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:53400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.671 [2024-07-15 18:39:15.886686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:26.671 [2024-07-15 18:39:15.886705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:53408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.671 [2024-07-15 18:39:15.886720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:26.671 [2024-07-15 18:39:15.886739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:53416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.671 [2024-07-15 18:39:15.886753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:26.671 [2024-07-15 18:39:15.886771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:53424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.671 [2024-07-15 18:39:15.886785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:26.671 [2024-07-15 18:39:15.886804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:53432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.671 [2024-07-15 18:39:15.886818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:26.671 [2024-07-15 18:39:15.886841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:53440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.671 [2024-07-15 18:39:15.886855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:26.671 [2024-07-15 18:39:15.886873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:53448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.671 [2024-07-15 18:39:15.886887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:26.671 [2024-07-15 18:39:15.886906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:53456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.671 [2024-07-15 18:39:15.886919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:26.671 [2024-07-15 18:39:15.886938] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:53464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.671 [2024-07-15 18:39:15.886951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:26.671 [2024-07-15 18:39:15.886970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:53472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.671 [2024-07-15 18:39:15.886983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:26.671 [2024-07-15 18:39:15.887002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:53480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.671 [2024-07-15 18:39:15.887015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:26.671 [2024-07-15 18:39:15.887034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:53488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.671 [2024-07-15 18:39:15.887048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:26.671 [2024-07-15 18:39:15.887066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:53496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.671 [2024-07-15 18:39:15.887079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:26.671 [2024-07-15 18:39:15.887098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:53504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.671 [2024-07-15 18:39:15.887112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:26.671 [2024-07-15 18:39:15.887131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:53512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.671 [2024-07-15 18:39:15.887144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:26.671 [2024-07-15 18:39:15.887163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:53520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.671 [2024-07-15 18:39:15.887176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:26.671 [2024-07-15 18:39:15.887196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:53528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.671 [2024-07-15 18:39:15.887209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:26.671 [2024-07-15 18:39:15.887228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:53536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.671 [2024-07-15 18:39:15.887258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 
00:20:26.671 [2024-07-15 18:39:15.887934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:53544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.671 [2024-07-15 18:39:15.887960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:26.671 [2024-07-15 18:39:15.887982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:53552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.671 [2024-07-15 18:39:15.887996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:26.671 [2024-07-15 18:39:15.888015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:53560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.671 [2024-07-15 18:39:15.888028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:26.671 [2024-07-15 18:39:15.888048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:53568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.671 [2024-07-15 18:39:15.888061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:26.671 [2024-07-15 18:39:15.888080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:53576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.672 [2024-07-15 18:39:15.888093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:26.672 [2024-07-15 18:39:15.888113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:53584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.672 [2024-07-15 18:39:15.888126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:26.672 [2024-07-15 18:39:15.888145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:53592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.672 [2024-07-15 18:39:15.888158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:26.672 [2024-07-15 18:39:15.888177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:53600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.672 [2024-07-15 18:39:15.888191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:20:26.672 [2024-07-15 18:39:15.888211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:53608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.672 [2024-07-15 18:39:15.888225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:26.672 [2024-07-15 18:39:15.888244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:53616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.672 [2024-07-15 18:39:15.905802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:48 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:26.672 [2024-07-15 18:39:15.905896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:53624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.672 [2024-07-15 18:39:15.905919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:26.672 [2024-07-15 18:39:15.905946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:53632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.672 [2024-07-15 18:39:15.905983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:26.672 [2024-07-15 18:39:15.906010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:53640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.672 [2024-07-15 18:39:15.906029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:26.672 [2024-07-15 18:39:15.906055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:53648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.672 [2024-07-15 18:39:15.906073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:26.672 [2024-07-15 18:39:15.906098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:53656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.672 [2024-07-15 18:39:15.906117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:26.672 [2024-07-15 18:39:15.906143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:53664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.672 [2024-07-15 18:39:15.906162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:26.672 [2024-07-15 18:39:15.906188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:53672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.672 [2024-07-15 18:39:15.906206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:20:26.672 [2024-07-15 18:39:15.906233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:53232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.672 [2024-07-15 18:39:15.906251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:20:26.672 [2024-07-15 18:39:15.906277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:53240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.672 [2024-07-15 18:39:15.906295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:26.672 [2024-07-15 18:39:15.906321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:53248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.672 [2024-07-15 18:39:15.906339] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:26.672 [2024-07-15 18:39:15.906366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:53256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.672 [2024-07-15 18:39:15.906384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:20:26.672 [2024-07-15 18:39:15.906410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:53264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.672 [2024-07-15 18:39:15.906428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:26.672 [2024-07-15 18:39:15.906454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:53272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.672 [2024-07-15 18:39:15.906472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:26.672 [2024-07-15 18:39:15.906498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:53280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.672 [2024-07-15 18:39:15.906516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:20:26.672 [2024-07-15 18:39:15.906551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:53288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.672 [2024-07-15 18:39:15.906587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:26.672 [2024-07-15 18:39:15.906614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:53296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.672 [2024-07-15 18:39:15.906632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:26.672 [2024-07-15 18:39:15.906658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:53680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.672 [2024-07-15 18:39:15.906676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:26.672 [2024-07-15 18:39:15.906702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:53688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.672 [2024-07-15 18:39:15.906720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:26.672 [2024-07-15 18:39:15.906746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:53696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.672 [2024-07-15 18:39:15.906765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:26.672 [2024-07-15 18:39:15.906791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:53704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:20:26.672 [2024-07-15 18:39:15.906809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:20:26.672 [2024-07-15 18:39:15.906835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:53712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.672 [2024-07-15 18:39:15.906853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:26.672 [2024-07-15 18:39:15.906880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:53720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.672 [2024-07-15 18:39:15.906898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:20:26.672 [2024-07-15 18:39:15.906923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:53728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.672 [2024-07-15 18:39:15.906941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:26.672 [2024-07-15 18:39:15.906968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:53736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.672 [2024-07-15 18:39:15.906986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:26.672 [2024-07-15 18:39:15.907012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.672 [2024-07-15 18:39:15.907030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:26.672 [2024-07-15 18:39:15.907056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:53752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.672 [2024-07-15 18:39:15.907075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:26.672 [2024-07-15 18:39:15.907111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:53760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.672 [2024-07-15 18:39:15.907129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:20:26.672 [2024-07-15 18:39:15.907155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:53768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.672 [2024-07-15 18:39:15.907173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:26.672 [2024-07-15 18:39:15.907199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:53776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.672 [2024-07-15 18:39:15.907217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:20:26.672 [2024-07-15 18:39:15.907258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:53784 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.672 [2024-07-15 18:39:15.907278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:26.672 [2024-07-15 18:39:15.907304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:53792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.672 [2024-07-15 18:39:15.907323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:26.672 [2024-07-15 18:39:15.907349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:53800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.672 [2024-07-15 18:39:15.907368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:20:26.672 [2024-07-15 18:39:15.907394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:53808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.672 [2024-07-15 18:39:15.907412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:20:26.672 [2024-07-15 18:39:15.907437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:53816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.672 [2024-07-15 18:39:15.907456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:20:26.672 [2024-07-15 18:39:15.907482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:53824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.672 [2024-07-15 18:39:15.907500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:20:26.672 [2024-07-15 18:39:15.907526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:53832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.672 [2024-07-15 18:39:15.907543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:26.672 [2024-07-15 18:39:15.907583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:53840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.673 [2024-07-15 18:39:15.907602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:26.673 [2024-07-15 18:39:15.907627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:53848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.673 [2024-07-15 18:39:15.907646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:26.673 [2024-07-15 18:39:15.907672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:53856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.673 [2024-07-15 18:39:15.907696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:26.673 [2024-07-15 18:39:15.907723] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:53864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.673 [2024-07-15 18:39:15.907741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:26.673 [2024-07-15 18:39:15.907766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:53872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.673 [2024-07-15 18:39:15.907785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:26.673 [2024-07-15 18:39:15.907811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:53880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.673 [2024-07-15 18:39:15.907829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:26.673 [2024-07-15 18:39:15.907855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:53888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.673 [2024-07-15 18:39:15.907873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:26.673 [2024-07-15 18:39:15.907899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:53896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.673 [2024-07-15 18:39:15.907916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:26.673 [2024-07-15 18:39:15.907943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:53904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.673 [2024-07-15 18:39:15.907961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:20:26.673 [2024-07-15 18:39:15.907986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:53912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.673 [2024-07-15 18:39:15.908005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:20:26.673 [2024-07-15 18:39:15.908030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:53920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.673 [2024-07-15 18:39:15.908049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:26.673 [2024-07-15 18:39:15.908076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:53928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.673 [2024-07-15 18:39:15.908094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:26.673 [2024-07-15 18:39:15.909002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:53936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.673 [2024-07-15 18:39:15.909036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:26.673 [2024-07-15 
18:39:15.909068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:53944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.673 [2024-07-15 18:39:15.909087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:26.673 [2024-07-15 18:39:15.909113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:53952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.673 [2024-07-15 18:39:15.909142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:26.673 [2024-07-15 18:39:15.909169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:53960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.673 [2024-07-15 18:39:15.909187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:26.673 [2024-07-15 18:39:15.909212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:53968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.673 [2024-07-15 18:39:15.909230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:26.673 [2024-07-15 18:39:15.909255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:53976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.673 [2024-07-15 18:39:15.909273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:26.673 [2024-07-15 18:39:15.909299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:53984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.673 [2024-07-15 18:39:15.909317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:26.673 [2024-07-15 18:39:15.909342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:53992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.673 [2024-07-15 18:39:15.909360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:26.673 [2024-07-15 18:39:15.909386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:54000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.673 [2024-07-15 18:39:15.909404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:26.673 [2024-07-15 18:39:15.909430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:54008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.673 [2024-07-15 18:39:15.909448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:26.673 [2024-07-15 18:39:15.909473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:54016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.673 [2024-07-15 18:39:15.909491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 
sqhd:0037 p:0 m:0 dnr:0 00:20:26.673 [2024-07-15 18:39:15.909517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:54024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.673 [2024-07-15 18:39:15.909535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:26.673 [2024-07-15 18:39:15.909561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:54032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.673 [2024-07-15 18:39:15.909595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:26.673 [2024-07-15 18:39:15.909621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:54040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.673 [2024-07-15 18:39:15.909639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:26.673 [2024-07-15 18:39:15.909665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:54048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.673 [2024-07-15 18:39:15.909685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:26.673 [2024-07-15 18:39:15.909718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:54056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.673 [2024-07-15 18:39:15.909736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:26.673 [2024-07-15 18:39:15.909762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:54064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.673 [2024-07-15 18:39:15.909780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:26.673 [2024-07-15 18:39:15.909806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:54072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.673 [2024-07-15 18:39:15.909824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:26.673 [2024-07-15 18:39:15.909850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:54080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.673 [2024-07-15 18:39:15.909868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:26.673 [2024-07-15 18:39:15.909894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:54088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.673 [2024-07-15 18:39:15.909912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:26.673 [2024-07-15 18:39:15.909939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:54096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.673 [2024-07-15 18:39:15.909957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:26.673 [2024-07-15 18:39:15.909982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:54104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.673 [2024-07-15 18:39:15.910001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:26.673 [2024-07-15 18:39:15.910027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:54112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.674 [2024-07-15 18:39:15.910045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:26.674 [2024-07-15 18:39:15.910070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:54120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.674 [2024-07-15 18:39:15.910088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:26.674 [2024-07-15 18:39:15.910114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:54128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.674 [2024-07-15 18:39:15.910132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:26.674 [2024-07-15 18:39:15.910158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:54136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.674 [2024-07-15 18:39:15.910176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:26.674 [2024-07-15 18:39:15.910202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:54144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.674 [2024-07-15 18:39:15.910220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:26.674 [2024-07-15 18:39:15.910252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:54152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.674 [2024-07-15 18:39:15.910270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:26.674 [2024-07-15 18:39:15.910295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:54160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.674 [2024-07-15 18:39:15.910313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:20:26.674 [2024-07-15 18:39:15.910339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:54168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.674 [2024-07-15 18:39:15.910357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:26.674 [2024-07-15 18:39:15.910383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:54176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.674 [2024-07-15 18:39:15.910401] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:26.674 [2024-07-15 18:39:15.910427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:54184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.674 [2024-07-15 18:39:15.910445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:26.674 [2024-07-15 18:39:15.910471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:54192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.674 [2024-07-15 18:39:15.910489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:26.674 [2024-07-15 18:39:15.910515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:54200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.674 [2024-07-15 18:39:15.910533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:26.674 [2024-07-15 18:39:15.910558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:54208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.674 [2024-07-15 18:39:15.910590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:26.674 [2024-07-15 18:39:15.910616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:54216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.674 [2024-07-15 18:39:15.910634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:26.674 [2024-07-15 18:39:15.910660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:54224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.674 [2024-07-15 18:39:15.910678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:26.674 [2024-07-15 18:39:15.910704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:54232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.674 [2024-07-15 18:39:15.910722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:26.674 [2024-07-15 18:39:15.910748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:54240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.674 [2024-07-15 18:39:15.910766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:26.674 [2024-07-15 18:39:15.910792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:53224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.674 [2024-07-15 18:39:15.910815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:26.674 [2024-07-15 18:39:15.910842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:53304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:20:26.674 [2024-07-15 18:39:15.910860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:26.674 [2024-07-15 18:39:15.910886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:53312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.674 [2024-07-15 18:39:15.910904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:26.674 [2024-07-15 18:39:15.910929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:53320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.674 [2024-07-15 18:39:15.910947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:26.674 [2024-07-15 18:39:15.910973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:53328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.674 [2024-07-15 18:39:15.910991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:26.674 [2024-07-15 18:39:15.911017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:53336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.674 [2024-07-15 18:39:15.911034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:26.674 [2024-07-15 18:39:15.911061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:53344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.674 [2024-07-15 18:39:15.911079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:26.674 [2024-07-15 18:39:15.911105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:53352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.674 [2024-07-15 18:39:15.911123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:26.674 [2024-07-15 18:39:15.911149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:53360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.674 [2024-07-15 18:39:15.911167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:26.674 [2024-07-15 18:39:15.911193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:53368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.674 [2024-07-15 18:39:15.911211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:26.674 [2024-07-15 18:39:15.911251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:53376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.674 [2024-07-15 18:39:15.911271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:26.674 [2024-07-15 18:39:15.911296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:53384 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.674 [2024-07-15 18:39:15.911315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:26.674 [2024-07-15 18:39:15.911340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:53392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.674 [2024-07-15 18:39:15.911364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:26.674 [2024-07-15 18:39:15.911390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:53400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.674 [2024-07-15 18:39:15.911408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:26.674 [2024-07-15 18:39:15.911434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:53408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.674 [2024-07-15 18:39:15.911452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:26.674 [2024-07-15 18:39:15.911478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:53416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.674 [2024-07-15 18:39:15.911496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:26.674 [2024-07-15 18:39:15.911521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:53424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.674 [2024-07-15 18:39:15.911539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:26.674 [2024-07-15 18:39:15.911576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:53432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.674 [2024-07-15 18:39:15.911595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:26.674 [2024-07-15 18:39:15.911621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:53440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.674 [2024-07-15 18:39:15.911639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:26.674 [2024-07-15 18:39:15.911665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:53448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.674 [2024-07-15 18:39:15.911684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:26.674 [2024-07-15 18:39:15.911710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:53456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.674 [2024-07-15 18:39:15.911727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:26.674 [2024-07-15 18:39:15.911753] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:53464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.674 [2024-07-15 18:39:15.911771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:26.674 [2024-07-15 18:39:15.911797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:53472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.674 [2024-07-15 18:39:15.911815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:26.674 [2024-07-15 18:39:15.911841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:53480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.674 [2024-07-15 18:39:15.911859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:26.674 [2024-07-15 18:39:15.911886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:53488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.674 [2024-07-15 18:39:15.911905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:26.674 [2024-07-15 18:39:15.911936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:53496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.675 [2024-07-15 18:39:15.911955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:26.675 [2024-07-15 18:39:15.911980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:53504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.675 [2024-07-15 18:39:15.911998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:26.675 [2024-07-15 18:39:15.912024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:53512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.675 [2024-07-15 18:39:15.912042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:26.675 [2024-07-15 18:39:15.912068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:53520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.675 [2024-07-15 18:39:15.912086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:26.675 [2024-07-15 18:39:15.912113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:53528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.675 [2024-07-15 18:39:15.912132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:26.675 [2024-07-15 18:39:15.913138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:53536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.675 [2024-07-15 18:39:15.913173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 
00:20:26.675 [2024-07-15 18:39:15.913208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:53544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.675 [2024-07-15 18:39:15.913230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:26.675 [2024-07-15 18:39:15.913260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:53552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.675 [2024-07-15 18:39:15.913281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:26.675 [2024-07-15 18:39:15.913311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:53560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.675 [2024-07-15 18:39:15.913332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:26.675 [2024-07-15 18:39:15.913362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:53568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.675 [2024-07-15 18:39:15.913383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:26.675 [2024-07-15 18:39:15.913412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:53576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.675 [2024-07-15 18:39:15.913433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:26.675 [2024-07-15 18:39:15.913462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:53584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.675 [2024-07-15 18:39:15.913483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:26.675 [2024-07-15 18:39:15.913524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:53592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.675 [2024-07-15 18:39:15.913545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:26.675 [2024-07-15 18:39:15.913574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:53600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.675 [2024-07-15 18:39:15.913612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:20:26.675 [2024-07-15 18:39:15.913641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:53608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.675 [2024-07-15 18:39:15.913662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:26.675 [2024-07-15 18:39:15.913692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:53616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.675 [2024-07-15 18:39:15.913713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:26.675 [2024-07-15 18:39:15.913742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:53624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.675 [2024-07-15 18:39:15.913763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:26.675 [2024-07-15 18:39:15.913792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:53632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.675 [2024-07-15 18:39:15.913813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:26.675 [2024-07-15 18:39:15.913842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:53640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.675 [2024-07-15 18:39:15.913862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:26.675 [2024-07-15 18:39:15.913892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:53648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.675 [2024-07-15 18:39:15.913913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:26.675 [2024-07-15 18:39:15.913942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:53656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.675 [2024-07-15 18:39:15.913964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:26.675 [2024-07-15 18:39:15.913993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:53664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.675 [2024-07-15 18:39:15.914014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:26.675 [2024-07-15 18:39:15.914043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:53672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.675 [2024-07-15 18:39:15.914063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:20:26.675 [2024-07-15 18:39:15.914093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:53232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.675 [2024-07-15 18:39:15.914113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:20:26.675 [2024-07-15 18:39:15.914142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:53240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.675 [2024-07-15 18:39:15.914170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:26.675 [2024-07-15 18:39:15.914199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:53248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.675 [2024-07-15 18:39:15.914220] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:26.675 [2024-07-15 18:39:15.914249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:53256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.675 [2024-07-15 18:39:15.914269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:20:26.675 [2024-07-15 18:39:15.914299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:53264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.675 [2024-07-15 18:39:15.914320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:26.675 [2024-07-15 18:39:15.914349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:53272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.675 [2024-07-15 18:39:15.914370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:26.676 [2024-07-15 18:39:15.914399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:53280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.676 [2024-07-15 18:39:15.914419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:20:26.676 [2024-07-15 18:39:15.914449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:53288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.676 [2024-07-15 18:39:15.914469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:26.676 [2024-07-15 18:39:15.914499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:53296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.676 [2024-07-15 18:39:15.914519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:26.676 [2024-07-15 18:39:15.914548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:53680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.676 [2024-07-15 18:39:15.914583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:26.676 [2024-07-15 18:39:15.914613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:53688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.676 [2024-07-15 18:39:15.914634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:26.676 [2024-07-15 18:39:15.914663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:53696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.676 [2024-07-15 18:39:15.914683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:26.676 [2024-07-15 18:39:15.914713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:53704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:20:26.676 [2024-07-15 18:39:15.914733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:20:26.676 [2024-07-15 18:39:15.914763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:53712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.676 [2024-07-15 18:39:15.914790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:26.676 [2024-07-15 18:39:15.914820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:53720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.676 [2024-07-15 18:39:15.914840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:20:26.676 [2024-07-15 18:39:15.914870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:53728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.676 [2024-07-15 18:39:15.914890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:26.676 [2024-07-15 18:39:15.914919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:53736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.676 [2024-07-15 18:39:15.914940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:26.676 [2024-07-15 18:39:15.914969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:53744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.676 [2024-07-15 18:39:15.914990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:26.676 [2024-07-15 18:39:15.915019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:53752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.676 [2024-07-15 18:39:15.915040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:26.676 [2024-07-15 18:39:15.915069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:53760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.676 [2024-07-15 18:39:15.915089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:20:26.676 [2024-07-15 18:39:15.915119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:53768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.676 [2024-07-15 18:39:15.915140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:26.676 [2024-07-15 18:39:15.915169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:53776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.676 [2024-07-15 18:39:15.915190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:20:26.676 [2024-07-15 18:39:15.915219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 
lba:53784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.676 [2024-07-15 18:39:15.915258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:26.676 [2024-07-15 18:39:15.915288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:53792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.676 [2024-07-15 18:39:15.915309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:26.676 [2024-07-15 18:39:15.915338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:53800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.676 [2024-07-15 18:39:15.915359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:20:26.676 [2024-07-15 18:39:15.915388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:53808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.676 [2024-07-15 18:39:15.915409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:20:26.676 [2024-07-15 18:39:15.915445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:53816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.676 [2024-07-15 18:39:15.915466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:20:26.676 [2024-07-15 18:39:15.915495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:53824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.676 [2024-07-15 18:39:15.915516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:20:26.676 [2024-07-15 18:39:15.915545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:53832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.676 [2024-07-15 18:39:15.915577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:26.676 [2024-07-15 18:39:15.915607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:53840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.676 [2024-07-15 18:39:15.915629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:26.676 [2024-07-15 18:39:15.915658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:53848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.676 [2024-07-15 18:39:15.915679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:26.676 [2024-07-15 18:39:15.915708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:53856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.676 [2024-07-15 18:39:15.915729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:26.676 [2024-07-15 18:39:15.915758] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:53864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.676 [2024-07-15 18:39:15.915779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:26.676 [2024-07-15 18:39:15.915808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:53872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.676 [2024-07-15 18:39:15.915828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:26.676 [2024-07-15 18:39:15.915858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:53880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.676 [2024-07-15 18:39:15.915879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:26.676 [2024-07-15 18:39:15.915908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:53888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.676 [2024-07-15 18:39:15.915929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:26.676 [2024-07-15 18:39:15.915958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:53896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.676 [2024-07-15 18:39:15.915979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:26.676 [2024-07-15 18:39:15.916009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:53904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.676 [2024-07-15 18:39:15.916029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:20:26.676 [2024-07-15 18:39:15.916066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:53912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.676 [2024-07-15 18:39:15.916087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:20:26.676 [2024-07-15 18:39:15.916117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:53920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.676 [2024-07-15 18:39:15.916138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:26.676 [2024-07-15 18:39:15.916993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:53928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.676 [2024-07-15 18:39:15.917028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:26.676 [2024-07-15 18:39:15.917062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:53936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.676 [2024-07-15 18:39:15.917083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002d p:0 m:0 dnr:0 
00:20:26.676 [2024-07-15 18:39:15.917113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:53944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.676 [2024-07-15 18:39:15.917133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:26.676 [2024-07-15 18:39:15.917163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:53952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.676 [2024-07-15 18:39:15.917184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:26.676 [2024-07-15 18:39:15.917213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:53960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.676 [2024-07-15 18:39:15.917234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:26.677 [2024-07-15 18:39:15.917263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:53968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.677 [2024-07-15 18:39:15.917284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:26.677 [2024-07-15 18:39:15.917314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:53976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.677 [2024-07-15 18:39:15.917334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:26.677 [2024-07-15 18:39:15.917363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:53984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.677 [2024-07-15 18:39:15.917384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:26.677 [2024-07-15 18:39:15.917414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:53992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.677 [2024-07-15 18:39:15.917435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:26.677 [2024-07-15 18:39:15.917464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:54000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.677 [2024-07-15 18:39:15.917485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:26.677 [2024-07-15 18:39:15.917515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:54008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.677 [2024-07-15 18:39:15.917546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:26.677 [2024-07-15 18:39:15.917590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:54016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.677 [2024-07-15 18:39:15.917612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:77 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:26.677 [2024-07-15 18:39:15.917642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:54024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.677 [2024-07-15 18:39:15.917663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:26.677 [2024-07-15 18:39:15.917692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:54032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.677 [2024-07-15 18:39:15.917713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:26.677 [2024-07-15 18:39:15.917742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:54040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.677 [2024-07-15 18:39:15.917763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:26.677 [2024-07-15 18:39:15.917792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:54048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.677 [2024-07-15 18:39:15.917813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:26.677 [2024-07-15 18:39:15.917843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:54056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.677 [2024-07-15 18:39:15.917864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:26.677 [2024-07-15 18:39:15.917893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:54064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.677 [2024-07-15 18:39:15.917913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:26.677 [2024-07-15 18:39:15.917943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:54072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.677 [2024-07-15 18:39:15.917963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:26.677 [2024-07-15 18:39:15.917993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:54080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.677 [2024-07-15 18:39:15.918014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:26.677 [2024-07-15 18:39:15.918043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:54088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.677 [2024-07-15 18:39:15.918063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:26.677 [2024-07-15 18:39:15.918093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:54096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.677 [2024-07-15 18:39:15.918114] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:26.677 [2024-07-15 18:39:15.918144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:54104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.677 [2024-07-15 18:39:15.918172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:26.677 [2024-07-15 18:39:15.918202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:54112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.677 [2024-07-15 18:39:15.918222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:26.677 [2024-07-15 18:39:15.918251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:54120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.677 [2024-07-15 18:39:15.918272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:26.677 [2024-07-15 18:39:15.918302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:54128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.677 [2024-07-15 18:39:15.918323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:26.677 [2024-07-15 18:39:15.918352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:54136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.677 [2024-07-15 18:39:15.918372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:26.677 [2024-07-15 18:39:15.918402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:54144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.677 [2024-07-15 18:39:15.918422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:26.677 [2024-07-15 18:39:15.918451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:54152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.677 [2024-07-15 18:39:15.918472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:26.677 [2024-07-15 18:39:15.918501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:54160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.677 [2024-07-15 18:39:15.918522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:20:26.677 [2024-07-15 18:39:15.918551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:54168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.677 [2024-07-15 18:39:15.918586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:26.677 [2024-07-15 18:39:15.918616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:54176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.677 
[2024-07-15 18:39:15.918637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:26.677 [2024-07-15 18:39:15.918666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:54184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.677 [2024-07-15 18:39:15.918687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:26.677 [2024-07-15 18:39:15.918717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:54192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.677 [2024-07-15 18:39:15.918737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:26.677 [2024-07-15 18:39:15.918766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:54200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.677 [2024-07-15 18:39:15.918787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:26.677 [2024-07-15 18:39:15.918826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:54208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.677 [2024-07-15 18:39:15.918847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:26.677 [2024-07-15 18:39:15.918876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:54216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.677 [2024-07-15 18:39:15.918896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:26.677 [2024-07-15 18:39:15.918925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:54224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.677 [2024-07-15 18:39:15.918946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:26.677 [2024-07-15 18:39:15.918975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:54232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.677 [2024-07-15 18:39:15.918995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:26.677 [2024-07-15 18:39:15.919025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:54240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.677 [2024-07-15 18:39:15.919045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:26.677 [2024-07-15 18:39:15.919075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:53224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.677 [2024-07-15 18:39:15.919096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:26.677 [2024-07-15 18:39:15.919125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:53304 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.677 [2024-07-15 18:39:15.919146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:26.677 [2024-07-15 18:39:15.919175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:53312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.677 [2024-07-15 18:39:15.919196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:26.677 [2024-07-15 18:39:15.919250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:53320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.677 [2024-07-15 18:39:15.919272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:26.677 [2024-07-15 18:39:15.919302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:53328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.677 [2024-07-15 18:39:15.919323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:26.677 [2024-07-15 18:39:15.919352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:53336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.677 [2024-07-15 18:39:15.919372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:26.678 [2024-07-15 18:39:15.919402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:53344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.678 [2024-07-15 18:39:15.919422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:26.678 [2024-07-15 18:39:15.919458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:53352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.678 [2024-07-15 18:39:15.919479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:26.678 [2024-07-15 18:39:15.919509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:53360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.678 [2024-07-15 18:39:15.919529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:26.678 [2024-07-15 18:39:15.919558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:53368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.678 [2024-07-15 18:39:15.919590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:26.678 [2024-07-15 18:39:15.919620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:53376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.678 [2024-07-15 18:39:15.919640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:26.678 [2024-07-15 18:39:15.919670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:53 nsid:1 lba:53384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.678 [2024-07-15 18:39:15.919690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:26.678 [2024-07-15 18:39:15.919719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:53392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.678 [2024-07-15 18:39:15.919740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:26.678 [2024-07-15 18:39:15.919769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:53400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.678 [2024-07-15 18:39:15.919790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:26.678 [2024-07-15 18:39:15.919819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:53408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.678 [2024-07-15 18:39:15.919839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:26.678 [2024-07-15 18:39:15.919868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:53416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.678 [2024-07-15 18:39:15.919889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:26.678 [2024-07-15 18:39:15.919919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:53424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.678 [2024-07-15 18:39:15.919939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:26.678 [2024-07-15 18:39:15.919969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:53432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.678 [2024-07-15 18:39:15.919989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:26.678 [2024-07-15 18:39:15.920019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:53440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.678 [2024-07-15 18:39:15.920039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:26.678 [2024-07-15 18:39:15.920068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:53448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.678 [2024-07-15 18:39:15.920095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:26.678 [2024-07-15 18:39:15.920126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:53456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.678 [2024-07-15 18:39:15.920146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:26.678 [2024-07-15 18:39:15.920175] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:53464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.678 [2024-07-15 18:39:15.920196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:26.678 [2024-07-15 18:39:15.920226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:53472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.678 [2024-07-15 18:39:15.920246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:26.678 [2024-07-15 18:39:15.920275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:53480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.678 [2024-07-15 18:39:15.920296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:26.678 [2024-07-15 18:39:15.920325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:53488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.678 [2024-07-15 18:39:15.920345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:26.678 [2024-07-15 18:39:15.920374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:53496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.678 [2024-07-15 18:39:15.920396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:26.678 [2024-07-15 18:39:15.920434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:53504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.678 [2024-07-15 18:39:15.920455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:26.678 [2024-07-15 18:39:15.920484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:53512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.678 [2024-07-15 18:39:15.920505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:26.678 [2024-07-15 18:39:15.920534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:53520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.678 [2024-07-15 18:39:15.920555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:26.678 [2024-07-15 18:39:15.921665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:53528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.678 [2024-07-15 18:39:15.921700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:26.678 [2024-07-15 18:39:15.921735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:53536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.678 [2024-07-15 18:39:15.921756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0072 
p:0 m:0 dnr:0 00:20:26.678 [2024-07-15 18:39:15.921787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:53544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.678 [2024-07-15 18:39:15.921820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:26.678 [2024-07-15 18:39:15.921849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:53552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.678 [2024-07-15 18:39:15.921870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:26.678 [2024-07-15 18:39:15.921899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:53560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.678 [2024-07-15 18:39:15.921920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:26.678 [2024-07-15 18:39:15.921949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:53568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.678 [2024-07-15 18:39:15.921969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:26.678 [2024-07-15 18:39:15.921999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:53576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.678 [2024-07-15 18:39:15.922019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:26.678 [2024-07-15 18:39:15.922049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:53584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.678 [2024-07-15 18:39:15.922070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:26.678 [2024-07-15 18:39:15.922099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:53592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.678 [2024-07-15 18:39:15.922120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:26.678 [2024-07-15 18:39:15.922149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:53600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.678 [2024-07-15 18:39:15.922169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:20:26.678 [2024-07-15 18:39:15.922199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:53608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.678 [2024-07-15 18:39:15.922220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:26.678 [2024-07-15 18:39:15.922258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:53616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.678 [2024-07-15 18:39:15.922278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:26.678 [2024-07-15 18:39:15.922307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:53624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.678 [2024-07-15 18:39:15.922328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:26.678 [2024-07-15 18:39:15.922357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:53632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.678 [2024-07-15 18:39:15.922378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:26.678 [2024-07-15 18:39:15.922407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:53640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.678 [2024-07-15 18:39:15.922427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:26.678 [2024-07-15 18:39:15.922463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:53648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.678 [2024-07-15 18:39:15.922484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:26.678 [2024-07-15 18:39:15.922514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:53656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.678 [2024-07-15 18:39:15.922534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:26.678 [2024-07-15 18:39:15.922577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:53664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.678 [2024-07-15 18:39:15.922599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:26.678 [2024-07-15 18:39:15.922629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:53672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.679 [2024-07-15 18:39:15.922650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:20:26.679 [2024-07-15 18:39:15.922679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:53232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.679 [2024-07-15 18:39:15.922700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:20:26.679 [2024-07-15 18:39:15.922730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:53240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.679 [2024-07-15 18:39:15.922750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:26.679 [2024-07-15 18:39:15.922780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:53248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.679 [2024-07-15 18:39:15.922801] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:26.679 [2024-07-15 18:39:15.922830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:53256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.679 [2024-07-15 18:39:15.922851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:20:26.679 [2024-07-15 18:39:15.922880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:53264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.679 [2024-07-15 18:39:15.922901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:26.679 [2024-07-15 18:39:15.922930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:53272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.679 [2024-07-15 18:39:15.922951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:26.679 [2024-07-15 18:39:15.922980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:53280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.679 [2024-07-15 18:39:15.923000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:20:26.679 [2024-07-15 18:39:15.923035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:53288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.679 [2024-07-15 18:39:15.923048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:26.679 [2024-07-15 18:39:15.923072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:53296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.679 [2024-07-15 18:39:15.923086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:26.679 [2024-07-15 18:39:15.923105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:53680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.679 [2024-07-15 18:39:15.923119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:26.679 [2024-07-15 18:39:15.923138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:53688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.679 [2024-07-15 18:39:15.923151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:26.679 [2024-07-15 18:39:15.923171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:53696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.679 [2024-07-15 18:39:15.923185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:26.679 [2024-07-15 18:39:15.923204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:53704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:20:26.679 [2024-07-15 18:39:15.923227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:20:26.679 [2024-07-15 18:39:15.923259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:53712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.679 [2024-07-15 18:39:15.923273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:26.679 [2024-07-15 18:39:15.923292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:53720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.679 [2024-07-15 18:39:15.923306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:20:26.679 [2024-07-15 18:39:15.923326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.679 [2024-07-15 18:39:15.923339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:26.679 [2024-07-15 18:39:15.923359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:53736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.679 [2024-07-15 18:39:15.923373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:26.679 [2024-07-15 18:39:15.923393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:53744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.679 [2024-07-15 18:39:15.923406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:26.679 [2024-07-15 18:39:15.923425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:53752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.679 [2024-07-15 18:39:15.923439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:26.679 [2024-07-15 18:39:15.923459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:53760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.679 [2024-07-15 18:39:15.923473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:20:26.679 [2024-07-15 18:39:15.923492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:53768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.679 [2024-07-15 18:39:15.923510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:26.679 [2024-07-15 18:39:15.923530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:53776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.679 [2024-07-15 18:39:15.923543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:20:26.679 [2024-07-15 18:39:15.923562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:53784 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.679 [2024-07-15 18:39:15.923586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:26.679 [2024-07-15 18:39:15.923606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:53792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.679 [2024-07-15 18:39:15.923620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:26.679 [2024-07-15 18:39:15.923639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:53800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.679 [2024-07-15 18:39:15.923653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:20:26.679 [2024-07-15 18:39:15.923672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:53808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.679 [2024-07-15 18:39:15.923686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:20:26.679 [2024-07-15 18:39:15.923705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:53816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.679 [2024-07-15 18:39:15.923719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:20:26.679 [2024-07-15 18:39:15.923738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:53824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.679 [2024-07-15 18:39:15.923752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:20:26.679 [2024-07-15 18:39:15.923771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:53832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.679 [2024-07-15 18:39:15.923785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:26.679 [2024-07-15 18:39:15.923804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:53840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.679 [2024-07-15 18:39:15.923818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:26.679 [2024-07-15 18:39:15.923837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:53848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.679 [2024-07-15 18:39:15.923851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:26.679 [2024-07-15 18:39:15.923870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:53856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.679 [2024-07-15 18:39:15.923884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:26.679 [2024-07-15 18:39:15.923904] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:53864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.679 [2024-07-15 18:39:15.923923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:26.679 [2024-07-15 18:39:15.923942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:53872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.680 [2024-07-15 18:39:15.923956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:26.680 [2024-07-15 18:39:15.923976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:53880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.680 [2024-07-15 18:39:15.923989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:26.680 [2024-07-15 18:39:15.924009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:53888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.680 [2024-07-15 18:39:15.924022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:26.680 [2024-07-15 18:39:15.924042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:53896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.680 [2024-07-15 18:39:15.924057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:26.680 [2024-07-15 18:39:15.924076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:53904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.680 [2024-07-15 18:39:15.924090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:20:26.680 [2024-07-15 18:39:15.924109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:53912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.680 [2024-07-15 18:39:15.924123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:20:26.680 [2024-07-15 18:39:15.924693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:53920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.680 [2024-07-15 18:39:15.924716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:26.680 [2024-07-15 18:39:15.924739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:53928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.680 [2024-07-15 18:39:15.924753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:26.680 [2024-07-15 18:39:15.924773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:53936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.680 [2024-07-15 18:39:15.924787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002d p:0 m:0 dnr:0 
00:20:26.680 [2024-07-15 18:39:15.924806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:53944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.680 [2024-07-15 18:39:15.924820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:26.680 [2024-07-15 18:39:15.924839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:53952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.680 [2024-07-15 18:39:15.924853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:26.680 [2024-07-15 18:39:15.924872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:53960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.680 [2024-07-15 18:39:15.924886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:26.680 [2024-07-15 18:39:15.924912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:53968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.680 [2024-07-15 18:39:15.924926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:26.680 [2024-07-15 18:39:15.924946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:53976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.680 [2024-07-15 18:39:15.924960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:26.680 [2024-07-15 18:39:15.924979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:53984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.680 [2024-07-15 18:39:15.924993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:26.680 [2024-07-15 18:39:15.925013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:53992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.680 [2024-07-15 18:39:15.925026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:26.680 [2024-07-15 18:39:15.925046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:54000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.680 [2024-07-15 18:39:15.925059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:26.680 [2024-07-15 18:39:15.925079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:54008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.680 [2024-07-15 18:39:15.925092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:26.680 [2024-07-15 18:39:15.925112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:54016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.680 [2024-07-15 18:39:15.925126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:51 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:26.680 [2024-07-15 18:39:15.925146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:54024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.680 [2024-07-15 18:39:15.925159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:26.680 [2024-07-15 18:39:15.925178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:54032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.680 [2024-07-15 18:39:15.925192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:26.680 [2024-07-15 18:39:15.925211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:54040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.680 [2024-07-15 18:39:15.925225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:26.680 [2024-07-15 18:39:15.925244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:54048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.680 [2024-07-15 18:39:15.925258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:26.680 [2024-07-15 18:39:15.925277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:54056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.680 [2024-07-15 18:39:15.925291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:26.680 [2024-07-15 18:39:15.925316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:54064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.680 [2024-07-15 18:39:15.925329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:26.680 [2024-07-15 18:39:15.925349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:54072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.680 [2024-07-15 18:39:15.925362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:26.680 [2024-07-15 18:39:15.925382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:54080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.680 [2024-07-15 18:39:15.925396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:26.680 [2024-07-15 18:39:15.925416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:54088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.680 [2024-07-15 18:39:15.925429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:26.680 [2024-07-15 18:39:15.925449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:54096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.680 [2024-07-15 18:39:15.925463] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:26.680 [2024-07-15 18:39:15.925482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:54104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:26.680 [2024-07-15 18:39:15.925496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:20:26.681 [2024-07-15 18:39:15.926093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:53224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:26.681 [2024-07-15 18:39:15.926107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
[... the same nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs repeat for the remaining WRITE and READ commands on qid:1 (lba 53224-54240), every completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:20:26.686 [2024-07-15 18:39:15.934064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:53240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:26.686 [2024-07-15 18:39:15.934077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:20:26.686 [2024-07-15 18:39:15.934095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:53248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:26.686 [2024-07-15 18:39:15.934107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:26.686 [2024-07-15 18:39:15.934125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:53256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.686 [2024-07-15 18:39:15.934137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:20:26.686 [2024-07-15 18:39:15.934155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:53264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.686 [2024-07-15 18:39:15.934168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:26.686 [2024-07-15 18:39:15.934185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:53272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.686 [2024-07-15 18:39:15.934198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:26.686 [2024-07-15 18:39:15.934215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:53280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.686 [2024-07-15 18:39:15.934228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:20:26.686 [2024-07-15 18:39:15.934246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:53288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.686 [2024-07-15 18:39:15.934258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:26.686 [2024-07-15 18:39:15.934276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:53296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.686 [2024-07-15 18:39:15.934289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:26.686 [2024-07-15 18:39:15.934306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:53680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.686 [2024-07-15 18:39:15.934323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:26.686 [2024-07-15 18:39:15.934340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:53688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.686 [2024-07-15 18:39:15.934353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:26.686 [2024-07-15 18:39:15.934371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:53696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.686 [2024-07-15 18:39:15.934384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:26.686 [2024-07-15 18:39:15.934402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 
lba:53704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.686 [2024-07-15 18:39:15.934414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:20:26.686 [2024-07-15 18:39:15.934432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.686 [2024-07-15 18:39:15.934445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:26.686 [2024-07-15 18:39:15.934463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:53720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.686 [2024-07-15 18:39:15.934475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:20:26.686 [2024-07-15 18:39:15.934492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:53728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.686 [2024-07-15 18:39:15.934505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:26.686 [2024-07-15 18:39:15.934523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:53736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.686 [2024-07-15 18:39:15.934535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:26.686 [2024-07-15 18:39:15.934553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:53744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.686 [2024-07-15 18:39:15.934574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:26.686 [2024-07-15 18:39:15.934592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:53752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.686 [2024-07-15 18:39:15.934605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:26.686 [2024-07-15 18:39:15.934622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:53760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.686 [2024-07-15 18:39:15.934636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:20:26.686 [2024-07-15 18:39:15.934653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:53768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.686 [2024-07-15 18:39:15.934666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:26.686 [2024-07-15 18:39:15.934683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:53776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.686 [2024-07-15 18:39:15.934700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:20:26.686 [2024-07-15 18:39:15.934718] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:53784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.686 [2024-07-15 18:39:15.934731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:26.686 [2024-07-15 18:39:15.934748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:53792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.686 [2024-07-15 18:39:15.934761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:26.686 [2024-07-15 18:39:15.934778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:53800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.686 [2024-07-15 18:39:15.934791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:20:26.686 [2024-07-15 18:39:15.934808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:53808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.686 [2024-07-15 18:39:15.934821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:20:26.687 [2024-07-15 18:39:15.934839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:53816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.687 [2024-07-15 18:39:15.934851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:20:26.687 [2024-07-15 18:39:15.934868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:53824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.687 [2024-07-15 18:39:15.934884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:20:26.687 [2024-07-15 18:39:15.934901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:53832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.687 [2024-07-15 18:39:15.934914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:26.687 [2024-07-15 18:39:15.934932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:53840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.687 [2024-07-15 18:39:15.934944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:26.687 [2024-07-15 18:39:15.934962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:53848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.687 [2024-07-15 18:39:15.934974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:26.687 [2024-07-15 18:39:15.934992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:53856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.687 [2024-07-15 18:39:15.935004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 
00:20:26.687 [2024-07-15 18:39:15.935022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:53864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.687 [2024-07-15 18:39:15.935035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:26.687 [2024-07-15 18:39:15.935053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:53872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.687 [2024-07-15 18:39:15.935065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:26.687 [2024-07-15 18:39:15.935086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:53880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.687 [2024-07-15 18:39:15.935099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:26.687 [2024-07-15 18:39:15.935117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:53888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.687 [2024-07-15 18:39:15.935130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:26.687 [2024-07-15 18:39:15.935148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:53896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.687 [2024-07-15 18:39:15.935161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:26.687 [2024-07-15 18:39:15.935730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:53904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.687 [2024-07-15 18:39:15.935752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:20:26.687 [2024-07-15 18:39:15.935773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:53912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.687 [2024-07-15 18:39:15.935788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:20:26.687 [2024-07-15 18:39:15.935806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:53920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.687 [2024-07-15 18:39:15.935820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:26.687 [2024-07-15 18:39:15.935839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:53928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.687 [2024-07-15 18:39:15.935852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:26.687 [2024-07-15 18:39:15.935871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:53936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.687 [2024-07-15 18:39:15.935884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:26.687 [2024-07-15 18:39:15.935903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:53944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.687 [2024-07-15 18:39:15.935916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:26.687 [2024-07-15 18:39:15.935935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:53952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.687 [2024-07-15 18:39:15.935949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:26.687 [2024-07-15 18:39:15.935968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:53960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.687 [2024-07-15 18:39:15.935981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:26.687 [2024-07-15 18:39:15.936000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:53968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.687 [2024-07-15 18:39:15.936013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:26.687 [2024-07-15 18:39:15.936039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:53976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.687 [2024-07-15 18:39:15.936053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:26.687 [2024-07-15 18:39:15.936071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:53984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.687 [2024-07-15 18:39:15.936084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:26.687 [2024-07-15 18:39:15.936103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:53992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.687 [2024-07-15 18:39:15.936117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:26.687 [2024-07-15 18:39:15.936135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:54000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.687 [2024-07-15 18:39:15.936148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:26.687 [2024-07-15 18:39:15.936167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:54008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.687 [2024-07-15 18:39:15.936180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:26.687 [2024-07-15 18:39:15.936199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:54016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.687 [2024-07-15 18:39:15.936212] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:26.687 [2024-07-15 18:39:15.936231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:54024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.687 [2024-07-15 18:39:15.936244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:26.687 [2024-07-15 18:39:15.936263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:54032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.687 [2024-07-15 18:39:15.936276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:26.687 [2024-07-15 18:39:15.936295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:54040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.687 [2024-07-15 18:39:15.936308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:26.687 [2024-07-15 18:39:15.936326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:54048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.687 [2024-07-15 18:39:15.936339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:26.687 [2024-07-15 18:39:15.936358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:54056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.687 [2024-07-15 18:39:15.936371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:26.687 [2024-07-15 18:39:15.936390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:54064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.687 [2024-07-15 18:39:15.936403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:26.687 [2024-07-15 18:39:15.936422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:54072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.687 [2024-07-15 18:39:15.936441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:26.687 [2024-07-15 18:39:15.936473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:54080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.687 [2024-07-15 18:39:15.936486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:26.687 [2024-07-15 18:39:15.936503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:54088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.687 [2024-07-15 18:39:15.936516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:26.687 [2024-07-15 18:39:15.936534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:54096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:20:26.687 [2024-07-15 18:39:15.936546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:26.687 [2024-07-15 18:39:15.936564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:54104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.687 [2024-07-15 18:39:15.936576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:26.687 [2024-07-15 18:39:15.936604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:54112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.687 [2024-07-15 18:39:15.936617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:26.687 [2024-07-15 18:39:15.936634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:54120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.687 [2024-07-15 18:39:15.936647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:26.687 [2024-07-15 18:39:15.936665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:54128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.687 [2024-07-15 18:39:15.936677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:26.687 [2024-07-15 18:39:15.936695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:54136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.687 [2024-07-15 18:39:15.936707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:26.688 [2024-07-15 18:39:15.936725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:54144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.688 [2024-07-15 18:39:15.936738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:26.688 [2024-07-15 18:39:15.936756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:54152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.688 [2024-07-15 18:39:15.936768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:26.688 [2024-07-15 18:39:15.936786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:54160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.688 [2024-07-15 18:39:15.936798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:20:26.688 [2024-07-15 18:39:15.936816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:54168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.688 [2024-07-15 18:39:15.936833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:26.688 [2024-07-15 18:39:15.936850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 
lba:54176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.688 [2024-07-15 18:39:15.936863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:26.688 [2024-07-15 18:39:15.936881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:54184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.688 [2024-07-15 18:39:15.936893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:26.688 [2024-07-15 18:39:15.936911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:54192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.688 [2024-07-15 18:39:15.936923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:26.688 [2024-07-15 18:39:15.936941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:54200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.688 [2024-07-15 18:39:15.936953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:26.688 [2024-07-15 18:39:15.936971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:54208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.688 [2024-07-15 18:39:15.936984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:26.688 [2024-07-15 18:39:15.937001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:54216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.688 [2024-07-15 18:39:15.937014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:26.688 [2024-07-15 18:39:15.937031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:54224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.688 [2024-07-15 18:39:15.937044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:26.688 [2024-07-15 18:39:15.937062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:54232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.688 [2024-07-15 18:39:15.937074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:26.688 [2024-07-15 18:39:15.937092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:54240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.688 [2024-07-15 18:39:15.937105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:26.688 [2024-07-15 18:39:15.937123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:53224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.688 [2024-07-15 18:39:15.937135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:26.688 [2024-07-15 18:39:15.937153] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:53304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.688 [2024-07-15 18:39:15.937165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:26.688 [2024-07-15 18:39:15.937183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:53312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.688 [2024-07-15 18:39:15.937196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:26.688 [2024-07-15 18:39:15.937218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:53320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.688 [2024-07-15 18:39:15.937230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:26.688 [2024-07-15 18:39:15.937248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:53328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.688 [2024-07-15 18:39:15.937261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:26.688 [2024-07-15 18:39:15.937278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:53336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.688 [2024-07-15 18:39:15.937291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:26.688 [2024-07-15 18:39:15.937308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:53344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.688 [2024-07-15 18:39:15.937321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:26.688 [2024-07-15 18:39:15.937338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:53352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.688 [2024-07-15 18:39:15.937351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:26.688 [2024-07-15 18:39:15.937369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:53360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.688 [2024-07-15 18:39:15.937381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:26.688 [2024-07-15 18:39:15.937399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:53368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.688 [2024-07-15 18:39:15.937412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:26.688 [2024-07-15 18:39:15.937429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:53376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.688 [2024-07-15 18:39:15.937442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005e p:0 m:0 dnr:0 
00:20:26.688 [2024-07-15 18:39:15.937460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:53384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.688 [2024-07-15 18:39:15.937472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:26.688 [2024-07-15 18:39:15.937491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:53392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.688 [2024-07-15 18:39:15.937504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:26.688 [2024-07-15 18:39:15.937522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:53400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.688 [2024-07-15 18:39:15.937534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:26.688 [2024-07-15 18:39:15.937552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:53408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.688 [2024-07-15 18:39:15.937572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:26.688 [2024-07-15 18:39:15.937595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:53416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.688 [2024-07-15 18:39:15.937608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:26.688 [2024-07-15 18:39:15.937625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:53424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.688 [2024-07-15 18:39:15.937638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:26.688 [2024-07-15 18:39:15.937656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:53432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.688 [2024-07-15 18:39:15.937668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:26.688 [2024-07-15 18:39:15.937686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:53440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.688 [2024-07-15 18:39:15.937698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:26.688 [2024-07-15 18:39:15.937716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:53448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.688 [2024-07-15 18:39:15.937729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:26.688 [2024-07-15 18:39:15.937747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:53456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.688 [2024-07-15 18:39:15.937759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:126 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:26.688 [2024-07-15 18:39:15.937777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:53464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.688 [2024-07-15 18:39:15.937790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:26.688 [2024-07-15 18:39:15.937808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:53472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.688 [2024-07-15 18:39:15.937820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:26.688 [2024-07-15 18:39:15.937838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:53480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.688 [2024-07-15 18:39:15.937850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:26.688 [2024-07-15 18:39:15.937868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:53488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.688 [2024-07-15 18:39:15.937880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:26.688 [2024-07-15 18:39:15.937898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:53496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.688 [2024-07-15 18:39:15.937911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:26.688 [2024-07-15 18:39:15.938541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:53504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.688 [2024-07-15 18:39:15.938563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:26.688 [2024-07-15 18:39:15.938594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:53512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.688 [2024-07-15 18:39:15.938613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:26.689 [2024-07-15 18:39:15.938632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:53520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.689 [2024-07-15 18:39:15.938645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:26.689 [2024-07-15 18:39:15.938663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:53528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.689 [2024-07-15 18:39:15.938676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:26.689 [2024-07-15 18:39:15.938693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:53536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.689 [2024-07-15 18:39:15.938706] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:26.689 [2024-07-15 18:39:15.938723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:53544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.689 [2024-07-15 18:39:15.938736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:26.689 [2024-07-15 18:39:15.938754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:53552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.689 [2024-07-15 18:39:15.938766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:26.689 [2024-07-15 18:39:15.938784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:53560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.689 [2024-07-15 18:39:15.938796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:26.689 [2024-07-15 18:39:15.938814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:53568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.689 [2024-07-15 18:39:15.938826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:26.689 [2024-07-15 18:39:15.938844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:53576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.689 [2024-07-15 18:39:15.938856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:26.689 [2024-07-15 18:39:15.938874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:53584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.689 [2024-07-15 18:39:15.938887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:26.689 [2024-07-15 18:39:15.938904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:53592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.689 [2024-07-15 18:39:15.938917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:26.689 [2024-07-15 18:39:15.938934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:53600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.689 [2024-07-15 18:39:15.938947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:20:26.689 [2024-07-15 18:39:15.938965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:53608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.689 [2024-07-15 18:39:15.938989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:26.689 [2024-07-15 18:39:15.939007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:53616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:20:26.689 [2024-07-15 18:39:15.939019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:26.689 [2024-07-15 18:39:15.939037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:53624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.689 [2024-07-15 18:39:15.939050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:26.689 [2024-07-15 18:39:15.939068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:53632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.689 [2024-07-15 18:39:15.939080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:26.689 [2024-07-15 18:39:15.939098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:53640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.689 [2024-07-15 18:39:15.939111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:26.689 [2024-07-15 18:39:15.939128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:53648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.689 [2024-07-15 18:39:15.939141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:26.689 [2024-07-15 18:39:15.939159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:53656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.689 [2024-07-15 18:39:15.939171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:26.689 [2024-07-15 18:39:15.939189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:53664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.689 [2024-07-15 18:39:15.939201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:26.689 [2024-07-15 18:39:15.939249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:53672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.689 [2024-07-15 18:39:15.939264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:20:26.689 [2024-07-15 18:39:15.939283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:53232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.689 [2024-07-15 18:39:15.939296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:20:26.689 [2024-07-15 18:39:15.939315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:53240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.689 [2024-07-15 18:39:15.939328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:26.689 [2024-07-15 18:39:15.939347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 
lba:53248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.689 [2024-07-15 18:39:15.939360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:26.689 [2024-07-15 18:39:15.939378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:53256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.689 [2024-07-15 18:39:15.939392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:20:26.689 [2024-07-15 18:39:15.939415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:53264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.689 [2024-07-15 18:39:15.939429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:26.689 [2024-07-15 18:39:15.939448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:53272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.689 [2024-07-15 18:39:15.939461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:26.689 [2024-07-15 18:39:15.939480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:53280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.689 [2024-07-15 18:39:15.939493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:20:26.689 [2024-07-15 18:39:15.939512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:53288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.689 [2024-07-15 18:39:15.939525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:26.689 [2024-07-15 18:39:15.939544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:53296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.689 [2024-07-15 18:39:15.939558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:26.689 [2024-07-15 18:39:15.939587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:53680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.689 [2024-07-15 18:39:15.939601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:26.689 [2024-07-15 18:39:15.939620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:53688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.689 [2024-07-15 18:39:15.939633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:26.689 [2024-07-15 18:39:15.939651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:53696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.689 [2024-07-15 18:39:15.939665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:26.689 [2024-07-15 18:39:15.939684] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:53704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.689 [2024-07-15 18:39:15.939697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:20:26.689 [2024-07-15 18:39:15.939715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:53712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.689 [2024-07-15 18:39:15.939728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:26.689 [2024-07-15 18:39:15.939747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:53720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.689 [2024-07-15 18:39:15.939760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:20:26.690 [2024-07-15 18:39:15.939779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:53728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.690 [2024-07-15 18:39:15.939792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:26.690 [2024-07-15 18:39:15.939816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:53736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.690 [2024-07-15 18:39:15.939829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:26.690 [2024-07-15 18:39:15.939847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:53744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.690 [2024-07-15 18:39:15.939861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:26.690 [2024-07-15 18:39:15.939879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:53752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.690 [2024-07-15 18:39:15.939893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:26.690 [2024-07-15 18:39:15.939911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:53760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.690 [2024-07-15 18:39:15.939924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:20:26.690 [2024-07-15 18:39:15.939943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:53768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.690 [2024-07-15 18:39:15.939956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:26.690 [2024-07-15 18:39:15.939975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:53776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.690 [2024-07-15 18:39:15.939988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 
00:20:26.690 [2024-07-15 18:39:15.940007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:53784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.690 [2024-07-15 18:39:15.940020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:26.690 [2024-07-15 18:39:15.940039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:53792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.690 [2024-07-15 18:39:15.940052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:26.690 [2024-07-15 18:39:15.940071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:53800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.690 [2024-07-15 18:39:15.940084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:20:26.690 [2024-07-15 18:39:15.940103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:53808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.690 [2024-07-15 18:39:15.940116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:20:26.690 [2024-07-15 18:39:15.940135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:53816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.690 [2024-07-15 18:39:15.940148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:20:26.690 [2024-07-15 18:39:15.940167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:53824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.690 [2024-07-15 18:39:15.940181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:20:26.690 [2024-07-15 18:39:15.940212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:53832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.690 [2024-07-15 18:39:15.940228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:26.690 [2024-07-15 18:39:15.940246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:53840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.690 [2024-07-15 18:39:15.940259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:26.690 [2024-07-15 18:39:15.940276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:53848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.690 [2024-07-15 18:39:15.940289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:26.690 [2024-07-15 18:39:15.940306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:53856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.690 [2024-07-15 18:39:15.940319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:80 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:26.690 [2024-07-15 18:39:15.940336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:53864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.690 [2024-07-15 18:39:15.940349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:26.690 [2024-07-15 18:39:15.940367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:53872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.690 [2024-07-15 18:39:15.940379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:26.690 [2024-07-15 18:39:15.940397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:53880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.690 [2024-07-15 18:39:15.940409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:26.690 [2024-07-15 18:39:15.940428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:53888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.690 [2024-07-15 18:39:15.940440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:26.690 [2024-07-15 18:39:15.940944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:53896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.690 [2024-07-15 18:39:15.940965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:26.690 [2024-07-15 18:39:15.940986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:53904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.690 [2024-07-15 18:39:15.940999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:20:26.690 [2024-07-15 18:39:15.941017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:53912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.690 [2024-07-15 18:39:15.941030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:20:26.690 [2024-07-15 18:39:15.941048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:53920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.690 [2024-07-15 18:39:15.941060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:26.690 [2024-07-15 18:39:15.941077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:53928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.690 [2024-07-15 18:39:15.941096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:26.690 [2024-07-15 18:39:15.941114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:53936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.690 [2024-07-15 18:39:15.941127] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:26.690 [2024-07-15 18:39:15.941145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:53944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.690 [2024-07-15 18:39:15.941157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:26.690 [2024-07-15 18:39:15.941175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:53952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.690 [2024-07-15 18:39:15.941188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:26.690 [2024-07-15 18:39:15.941205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:53960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.690 [2024-07-15 18:39:15.941217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:26.690 [2024-07-15 18:39:15.941235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:53968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.690 [2024-07-15 18:39:15.941247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:26.690 [2024-07-15 18:39:15.941265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:53976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.690 [2024-07-15 18:39:15.941278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:26.690 [2024-07-15 18:39:15.941295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:53984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.690 [2024-07-15 18:39:15.941308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:26.690 [2024-07-15 18:39:15.941326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:53992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.690 [2024-07-15 18:39:15.941338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:26.690 [2024-07-15 18:39:15.941356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:54000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.690 [2024-07-15 18:39:15.941368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:26.690 [2024-07-15 18:39:15.941386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:54008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.690 [2024-07-15 18:39:15.941398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:26.690 [2024-07-15 18:39:15.941416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:54016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:20:26.690 [2024-07-15 18:39:15.941429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:26.690 [2024-07-15 18:39:15.941446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:54024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.690 [2024-07-15 18:39:15.941459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:26.690 [2024-07-15 18:39:15.941481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:54032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.690 [2024-07-15 18:39:15.941494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:26.690 [2024-07-15 18:39:15.941512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:54040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.690 [2024-07-15 18:39:15.941524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:26.690 [2024-07-15 18:39:15.941542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:54048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.690 [2024-07-15 18:39:15.941554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:26.690 [2024-07-15 18:39:15.941581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:54056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.691 [2024-07-15 18:39:15.941594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:26.691 [2024-07-15 18:39:15.941614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:54064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.691 [2024-07-15 18:39:15.941627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:26.691 [2024-07-15 18:39:15.941645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:54072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.691 [2024-07-15 18:39:15.941657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:26.691 [2024-07-15 18:39:15.941675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:54080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.691 [2024-07-15 18:39:15.941688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:26.691 [2024-07-15 18:39:15.941705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:54088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.691 [2024-07-15 18:39:15.941717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:26.691 [2024-07-15 18:39:15.941735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 
lba:54096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.691 [2024-07-15 18:39:15.941748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:26.691 [2024-07-15 18:39:15.941766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:54104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.691 [2024-07-15 18:39:15.941778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:26.691 [2024-07-15 18:39:15.941796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:54112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.691 [2024-07-15 18:39:15.941808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:26.691 [2024-07-15 18:39:15.941826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:54120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.691 [2024-07-15 18:39:15.941839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:26.691 [2024-07-15 18:39:15.941861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:54128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.691 [2024-07-15 18:39:15.941873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:26.691 [2024-07-15 18:39:15.941891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:54136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.691 [2024-07-15 18:39:15.941903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:26.691 [2024-07-15 18:39:15.941921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:54144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.691 [2024-07-15 18:39:15.941934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:26.691 [2024-07-15 18:39:15.941951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:54152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.691 [2024-07-15 18:39:15.941964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:26.691 [2024-07-15 18:39:15.941981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:54160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.691 [2024-07-15 18:39:15.941994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:20:26.691 [2024-07-15 18:39:15.942012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:54168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.691 [2024-07-15 18:39:15.942025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:26.691 [2024-07-15 18:39:15.942042] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:54176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.691 [2024-07-15 18:39:15.942055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:26.691 [2024-07-15 18:39:15.942072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:54184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.691 [2024-07-15 18:39:15.942085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:26.691 [2024-07-15 18:39:15.942103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:54192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.691 [2024-07-15 18:39:15.942116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:26.691 [2024-07-15 18:39:15.942134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:54200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.691 [2024-07-15 18:39:15.942146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:26.691 [2024-07-15 18:39:15.942164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:54208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.691 [2024-07-15 18:39:15.942177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:26.691 [2024-07-15 18:39:15.942194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:54216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.691 [2024-07-15 18:39:15.942207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:26.691 [2024-07-15 18:39:15.942224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:54224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.691 [2024-07-15 18:39:15.942241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:26.691 [2024-07-15 18:39:15.942259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:54232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.691 [2024-07-15 18:39:15.942271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:26.691 [2024-07-15 18:39:15.942289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:54240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.691 [2024-07-15 18:39:15.942301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:26.691 [2024-07-15 18:39:15.942319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:53224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.691 [2024-07-15 18:39:15.942332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 
00:20:26.691 [2024-07-15 18:39:15.942349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:53304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.691 [2024-07-15 18:39:15.942362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:26.691 [2024-07-15 18:39:15.942379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:53312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.691 [2024-07-15 18:39:15.942392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:26.691 [2024-07-15 18:39:15.942409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:53320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.691 [2024-07-15 18:39:15.942422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:26.691 [2024-07-15 18:39:15.942440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:53328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.691 [2024-07-15 18:39:15.942452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:26.691 [2024-07-15 18:39:15.942470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:53336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.691 [2024-07-15 18:39:15.942484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:26.691 [2024-07-15 18:39:15.942502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:53344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.691 [2024-07-15 18:39:15.942515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:26.691 [2024-07-15 18:39:15.942533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:53352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.691 [2024-07-15 18:39:15.942545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:26.691 [2024-07-15 18:39:15.942563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:53360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.691 [2024-07-15 18:39:15.942583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:26.691 [2024-07-15 18:39:15.942601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:53368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.691 [2024-07-15 18:39:15.942619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:26.691 [2024-07-15 18:39:15.942637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:53376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.691 [2024-07-15 18:39:15.942649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:70 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:26.691 [2024-07-15 18:39:15.942667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:53384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.691 [2024-07-15 18:39:15.942680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:26.691 [2024-07-15 18:39:15.942697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:53392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.691 [2024-07-15 18:39:15.942710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:26.691 [2024-07-15 18:39:15.942728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:53400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.691 [2024-07-15 18:39:15.942740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:26.691 [2024-07-15 18:39:15.942758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:53408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.691 [2024-07-15 18:39:15.942770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:26.691 [2024-07-15 18:39:15.942788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:53416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.691 [2024-07-15 18:39:15.942800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:26.691 [2024-07-15 18:39:15.942818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:53424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.691 [2024-07-15 18:39:15.942830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:26.691 [2024-07-15 18:39:15.942848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:53432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.692 [2024-07-15 18:39:15.942861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:26.692 [2024-07-15 18:39:15.942878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:53440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.692 [2024-07-15 18:39:15.942891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:26.692 [2024-07-15 18:39:15.942908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:53448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.692 [2024-07-15 18:39:15.942921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:26.692 [2024-07-15 18:39:15.942939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:53456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.692 [2024-07-15 18:39:15.942951] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:26.692 [2024-07-15 18:39:15.942969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:53464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.692 [2024-07-15 18:39:15.942982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:26.692 [2024-07-15 18:39:15.943004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:53472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.692 [2024-07-15 18:39:15.943017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:26.692 [2024-07-15 18:39:15.943035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:53480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.692 [2024-07-15 18:39:15.943047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:26.692 [2024-07-15 18:39:15.943065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:53488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.692 [2024-07-15 18:39:15.943078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:26.692 [2024-07-15 18:39:15.943799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:53496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.692 [2024-07-15 18:39:15.943823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:26.692 [2024-07-15 18:39:15.943845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:53504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.692 [2024-07-15 18:39:15.943859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:26.692 [2024-07-15 18:39:15.943878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:53512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.692 [2024-07-15 18:39:15.943891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:26.692 [2024-07-15 18:39:15.943910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:53520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.692 [2024-07-15 18:39:15.943924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:26.692 [2024-07-15 18:39:15.943942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:53528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.692 [2024-07-15 18:39:15.943955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:26.692 [2024-07-15 18:39:15.943974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:53536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:20:26.692 [2024-07-15 18:39:15.943988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:26.692 [2024-07-15 18:39:15.944006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:53544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.692 [2024-07-15 18:39:15.944020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:26.692 [2024-07-15 18:39:15.944039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:53552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.692 [2024-07-15 18:39:15.944052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:26.692 [2024-07-15 18:39:15.944071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:53560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.692 [2024-07-15 18:39:15.944084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:26.692 [2024-07-15 18:39:15.944114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:53568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.692 [2024-07-15 18:39:15.944127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:26.692 [2024-07-15 18:39:15.944146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:53576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.692 [2024-07-15 18:39:15.944159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:26.692 [2024-07-15 18:39:15.944178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:53584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.692 [2024-07-15 18:39:15.944191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:26.692 [2024-07-15 18:39:15.944210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:53592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.692 [2024-07-15 18:39:15.944224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:26.692 [2024-07-15 18:39:15.944242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:53600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.692 [2024-07-15 18:39:15.944255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:20:26.692 [2024-07-15 18:39:15.944274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:53608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.692 [2024-07-15 18:39:15.944287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:26.692 [2024-07-15 18:39:15.944306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 
lba:53616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.692 [2024-07-15 18:39:15.944332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:26.692 [2024-07-15 18:39:15.944350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:53624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.692 [2024-07-15 18:39:15.944363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:26.692 [2024-07-15 18:39:15.944380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:53632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.692 [2024-07-15 18:39:15.944393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:26.692 [2024-07-15 18:39:15.944411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:53640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.692 [2024-07-15 18:39:15.944423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:26.692 [2024-07-15 18:39:15.944441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:53648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.692 [2024-07-15 18:39:15.944453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:26.692 [2024-07-15 18:39:15.944471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:53656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.692 [2024-07-15 18:39:15.944483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:26.692 [2024-07-15 18:39:15.944500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:53664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.692 [2024-07-15 18:39:15.944517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:26.692 [2024-07-15 18:39:15.944535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:53672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.692 [2024-07-15 18:39:15.944548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:20:26.692 [2024-07-15 18:39:15.944566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:53232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.692 [2024-07-15 18:39:15.944587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:20:26.692 [2024-07-15 18:39:15.944606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:53240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.692 [2024-07-15 18:39:15.944619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:26.692 [2024-07-15 18:39:15.944637] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:53248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.692 [2024-07-15 18:39:15.944650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:26.692 [2024-07-15 18:39:15.944668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:53256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.692 [2024-07-15 18:39:15.944680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:20:26.692 [2024-07-15 18:39:15.944698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:53264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.692 [2024-07-15 18:39:15.944711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:26.692 [2024-07-15 18:39:15.944728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:53272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.692 [2024-07-15 18:39:15.944741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:26.692 [2024-07-15 18:39:15.944759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:53280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.692 [2024-07-15 18:39:15.944771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:20:26.692 [2024-07-15 18:39:15.944789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:53288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.692 [2024-07-15 18:39:15.944801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:26.692 [2024-07-15 18:39:15.944819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:53296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.692 [2024-07-15 18:39:15.944831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:26.692 [2024-07-15 18:39:15.944849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:53680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.692 [2024-07-15 18:39:15.944862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:26.692 [2024-07-15 18:39:15.944880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:53688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.692 [2024-07-15 18:39:15.944897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:26.692 [2024-07-15 18:39:15.944915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.693 [2024-07-15 18:39:15.944928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:20:26.693 [2024-07-15 18:39:15.944945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:53704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.693 [2024-07-15 18:39:15.944958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:20:26.693 [2024-07-15 18:39:15.944976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:53712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.693 [2024-07-15 18:39:15.944988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:26.693 [2024-07-15 18:39:15.945005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:53720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.693 [2024-07-15 18:39:15.945018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:20:26.693 [2024-07-15 18:39:15.945036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:53728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.693 [2024-07-15 18:39:15.945049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:26.693 [2024-07-15 18:39:15.945066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:53736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.693 [2024-07-15 18:39:15.945079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:26.693 [2024-07-15 18:39:15.945096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:53744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.693 [2024-07-15 18:39:15.945109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:26.693 [2024-07-15 18:39:15.945127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:53752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.693 [2024-07-15 18:39:15.945140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:26.693 [2024-07-15 18:39:15.945157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:53760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.693 [2024-07-15 18:39:15.945170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:20:26.693 [2024-07-15 18:39:15.945187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:53768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.693 [2024-07-15 18:39:15.945200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:26.693 [2024-07-15 18:39:15.945218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:53776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.693 [2024-07-15 18:39:15.945231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:76 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:20:26.693 [2024-07-15 18:39:15.945248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:53784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.693 [2024-07-15 18:39:15.945261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:26.693 [2024-07-15 18:39:15.945283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:53792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.693 [2024-07-15 18:39:15.945296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:26.693 [2024-07-15 18:39:15.945313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:53800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.693 [2024-07-15 18:39:15.945326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:20:26.693 [2024-07-15 18:39:15.945344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:53808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.693 [2024-07-15 18:39:15.945356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:20:26.693 [2024-07-15 18:39:15.945374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:53816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.693 [2024-07-15 18:39:15.945386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:20:26.693 [2024-07-15 18:39:15.945404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:53824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.693 [2024-07-15 18:39:15.945416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:20:26.693 [2024-07-15 18:39:15.945434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:53832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.693 [2024-07-15 18:39:15.945446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:26.693 [2024-07-15 18:39:15.945464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:53840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.693 [2024-07-15 18:39:15.945476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:26.693 [2024-07-15 18:39:15.945494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:53848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.693 [2024-07-15 18:39:15.945507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:26.693 [2024-07-15 18:39:15.945525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:53856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.693 [2024-07-15 18:39:15.945537] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:26.693 [2024-07-15 18:39:15.945555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:53864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.693 [2024-07-15 18:39:15.945576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:26.693 [2024-07-15 18:39:15.945594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:53872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.693 [2024-07-15 18:39:15.945607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:26.693 [2024-07-15 18:39:15.945625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:53880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.693 [2024-07-15 18:39:15.945638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:26.693 [2024-07-15 18:39:15.946144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:53888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.693 [2024-07-15 18:39:15.946165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:26.693 [2024-07-15 18:39:15.946184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:53896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.693 [2024-07-15 18:39:15.946197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:26.693 [2024-07-15 18:39:15.946215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:53904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.693 [2024-07-15 18:39:15.946227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:20:26.693 [2024-07-15 18:39:15.946246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:53912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.693 [2024-07-15 18:39:15.946259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:20:26.693 [2024-07-15 18:39:15.946277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:53920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.693 [2024-07-15 18:39:15.946289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:26.693 [2024-07-15 18:39:15.946306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:53928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.693 [2024-07-15 18:39:15.946319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:26.693 [2024-07-15 18:39:15.946337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:53936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:20:26.693 [2024-07-15 18:39:15.946349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:26.693 [2024-07-15 18:39:15.946367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:53944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.693 [2024-07-15 18:39:15.946380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:26.693 [2024-07-15 18:39:15.946398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:53952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.693 [2024-07-15 18:39:15.946410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:26.693 [2024-07-15 18:39:15.946428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:53960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.693 [2024-07-15 18:39:15.946440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:26.693 [2024-07-15 18:39:15.946458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:53968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.693 [2024-07-15 18:39:15.946471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:26.693 [2024-07-15 18:39:15.946489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:53976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.693 [2024-07-15 18:39:15.946502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:26.693 [2024-07-15 18:39:15.946519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:53984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.693 [2024-07-15 18:39:15.946538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:26.694 [2024-07-15 18:39:15.946558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:53992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.694 [2024-07-15 18:39:15.946580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:26.694 [2024-07-15 18:39:15.946598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:54000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.694 [2024-07-15 18:39:15.946610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:26.694 [2024-07-15 18:39:15.946628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:54008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.694 [2024-07-15 18:39:15.946641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:26.694 [2024-07-15 18:39:15.946659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 
lba:54016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.694 [2024-07-15 18:39:15.946671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:26.694 [2024-07-15 18:39:15.946689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:54024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.694 [2024-07-15 18:39:15.946701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:26.694 [2024-07-15 18:39:15.946719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:54032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.694 [2024-07-15 18:39:15.946732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:26.694 [2024-07-15 18:39:15.946750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:54040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.694 [2024-07-15 18:39:15.946762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:26.694 [2024-07-15 18:39:15.946780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:54048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.694 [2024-07-15 18:39:15.946792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:26.694 [2024-07-15 18:39:15.946810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:54056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.694 [2024-07-15 18:39:15.946822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:26.694 [2024-07-15 18:39:15.946840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:54064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.694 [2024-07-15 18:39:15.946853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:26.694 [2024-07-15 18:39:15.946870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:54072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.694 [2024-07-15 18:39:15.946883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:26.694 [2024-07-15 18:39:15.946901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:54080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.694 [2024-07-15 18:39:15.946918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:26.694 [2024-07-15 18:39:15.946936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:54088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.694 [2024-07-15 18:39:15.946948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:26.694 [2024-07-15 18:39:15.946966] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:54096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.694 [2024-07-15 18:39:15.946978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:26.694 [2024-07-15 18:39:15.946997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:54104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.694 [2024-07-15 18:39:15.947009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:26.694 [2024-07-15 18:39:15.947027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:54112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.694 [2024-07-15 18:39:15.947039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:26.694 [2024-07-15 18:39:15.947058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:54120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.694 [2024-07-15 18:39:15.947071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:26.694 [2024-07-15 18:39:15.947089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:54128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.694 [2024-07-15 18:39:15.947101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:26.694 [2024-07-15 18:39:15.947119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:54136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.694 [2024-07-15 18:39:15.947132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:26.694 [2024-07-15 18:39:15.947149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:54144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.694 [2024-07-15 18:39:15.947162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:26.694 [2024-07-15 18:39:15.947180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:54152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.694 [2024-07-15 18:39:15.947192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:26.694 [2024-07-15 18:39:15.947210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:54160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.694 [2024-07-15 18:39:15.947223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:20:26.694 [2024-07-15 18:39:15.947254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:54168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.694 [2024-07-15 18:39:15.947268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:26.694 
[2024-07-15 18:39:15.947286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:54176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.694 [2024-07-15 18:39:15.947299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:26.694 [2024-07-15 18:39:15.947320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:54184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.694 [2024-07-15 18:39:15.947333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:26.694 [2024-07-15 18:39:15.947350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:54192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.694 [2024-07-15 18:39:15.947363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:26.694 [2024-07-15 18:39:15.947381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:54200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.694 [2024-07-15 18:39:15.947393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:26.694 [2024-07-15 18:39:15.947411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:54208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.694 [2024-07-15 18:39:15.947423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:26.694 [2024-07-15 18:39:15.947441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:54216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.694 [2024-07-15 18:39:15.947453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:26.694 [2024-07-15 18:39:15.947471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:54224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.694 [2024-07-15 18:39:15.947484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:26.694 [2024-07-15 18:39:15.947501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:54232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.694 [2024-07-15 18:39:15.947515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:26.694 [2024-07-15 18:39:15.947532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:54240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.694 [2024-07-15 18:39:15.947545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:26.694 [2024-07-15 18:39:15.947563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:53224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.694 [2024-07-15 18:39:15.947584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 
cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:26.694 [2024-07-15 18:39:15.947602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:53304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.694 [2024-07-15 18:39:15.947615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:26.694 [2024-07-15 18:39:15.947632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:53312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.694 [2024-07-15 18:39:15.947645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:26.694 [2024-07-15 18:39:15.947663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:53320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.694 [2024-07-15 18:39:15.947675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:26.694 [2024-07-15 18:39:15.947698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:53328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.694 [2024-07-15 18:39:15.947710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:26.694 [2024-07-15 18:39:15.947728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:53336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.694 [2024-07-15 18:39:15.947741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:26.694 [2024-07-15 18:39:15.947759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:53344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.694 [2024-07-15 18:39:15.947771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:26.694 [2024-07-15 18:39:15.947789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:53352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.694 [2024-07-15 18:39:15.947801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:26.694 [2024-07-15 18:39:15.947819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:53360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.694 [2024-07-15 18:39:15.947831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:26.694 [2024-07-15 18:39:15.947849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:53368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.694 [2024-07-15 18:39:15.947861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:26.695 [2024-07-15 18:39:15.947879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:53376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.695 [2024-07-15 18:39:15.947892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:26.695 [2024-07-15 18:39:15.947909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:53384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.695 [2024-07-15 18:39:15.947922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:26.695 [2024-07-15 18:39:15.947940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:53392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.695 [2024-07-15 18:39:15.947952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:26.695 [2024-07-15 18:39:15.947970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:53400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.695 [2024-07-15 18:39:15.947983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:26.695 [2024-07-15 18:39:15.948001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:53408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.695 [2024-07-15 18:39:15.948013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:26.695 [2024-07-15 18:39:15.948033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:53416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.695 [2024-07-15 18:39:15.948046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:26.695 [2024-07-15 18:39:15.948064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:53424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.695 [2024-07-15 18:39:15.948080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:26.695 [2024-07-15 18:39:15.948098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:53432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.695 [2024-07-15 18:39:15.948111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:26.695 [2024-07-15 18:39:15.948129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:53440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.695 [2024-07-15 18:39:15.948141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:26.695 [2024-07-15 18:39:15.948159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:53448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.695 [2024-07-15 18:39:15.948172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:26.695 [2024-07-15 18:39:15.948189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:53456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.695 [2024-07-15 18:39:15.948202] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:26.695 [2024-07-15 18:39:15.948220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:53464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.695 [2024-07-15 18:39:15.948232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:26.695 [2024-07-15 18:39:15.948250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:53472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.695 [2024-07-15 18:39:15.948262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:26.695 [2024-07-15 18:39:15.948281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:53480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.695 [2024-07-15 18:39:15.948293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:26.695 [2024-07-15 18:39:15.948942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:53488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.695 [2024-07-15 18:39:15.948964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:26.695 [2024-07-15 18:39:15.948985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:53496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.695 [2024-07-15 18:39:15.948998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:26.695 [2024-07-15 18:39:15.949016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:53504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.695 [2024-07-15 18:39:15.949029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:26.695 [2024-07-15 18:39:15.949047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:53512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.695 [2024-07-15 18:39:15.949059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:26.695 [2024-07-15 18:39:15.949077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:53520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.695 [2024-07-15 18:39:15.949097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:26.695 [2024-07-15 18:39:15.949115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:53528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.695 [2024-07-15 18:39:15.949127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:26.695 [2024-07-15 18:39:15.949145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:53536 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:20:26.695 [2024-07-15 18:39:15.949157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:26.695 [2024-07-15 18:39:15.949176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:53544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.695 [2024-07-15 18:39:15.949189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:26.695 [2024-07-15 18:39:15.949206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:53552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.695 [2024-07-15 18:39:15.949219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:26.695 [2024-07-15 18:39:15.949236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:53560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.695 [2024-07-15 18:39:15.949249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:26.695 [2024-07-15 18:39:15.949267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:53568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.695 [2024-07-15 18:39:15.949279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:26.695 [2024-07-15 18:39:15.949297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:53576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.695 [2024-07-15 18:39:15.949309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:26.695 [2024-07-15 18:39:15.949327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:53584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.695 [2024-07-15 18:39:15.949339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:26.695 [2024-07-15 18:39:15.949357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:53592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.695 [2024-07-15 18:39:15.949370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:26.695 [2024-07-15 18:39:15.949387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:53600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.695 [2024-07-15 18:39:15.949399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:20:26.695 [2024-07-15 18:39:15.949417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:53608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.695 [2024-07-15 18:39:15.949430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:26.695 [2024-07-15 18:39:15.949447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 
nsid:1 lba:53616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.695 [2024-07-15 18:39:15.949460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:26.695 [2024-07-15 18:39:15.949481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:53624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.695 [2024-07-15 18:39:15.949494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:26.695 [2024-07-15 18:39:15.949512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:53632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.695 [2024-07-15 18:39:15.949525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:26.695 [2024-07-15 18:39:15.949542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:53640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.695 [2024-07-15 18:39:15.949555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:26.695 [2024-07-15 18:39:15.949583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:53648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.695 [2024-07-15 18:39:15.949596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:26.695 [2024-07-15 18:39:15.949614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:53656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.695 [2024-07-15 18:39:15.949626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:26.695 [2024-07-15 18:39:15.949644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:53664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.695 [2024-07-15 18:39:15.949658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:26.695 [2024-07-15 18:39:15.949676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.695 [2024-07-15 18:39:15.949688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:20:26.695 [2024-07-15 18:39:15.949706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:53232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.695 [2024-07-15 18:39:15.949719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:20:26.695 [2024-07-15 18:39:15.949737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:53240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.695 [2024-07-15 18:39:15.949749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:26.695 [2024-07-15 18:39:15.949767] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:53248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.695 [2024-07-15 18:39:15.949780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:26.695 [2024-07-15 18:39:15.949797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:53256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.695 [2024-07-15 18:39:15.949810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:20:26.696 [2024-07-15 18:39:15.949828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:53264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.696 [2024-07-15 18:39:15.949840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:26.696 [2024-07-15 18:39:15.949862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:53272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.696 [2024-07-15 18:39:15.949875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:26.696 [2024-07-15 18:39:15.949893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:53280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.696 [2024-07-15 18:39:15.949905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:20:26.696 [2024-07-15 18:39:15.949923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:53288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.696 [2024-07-15 18:39:15.949935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:26.696 [2024-07-15 18:39:15.949953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:53296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.696 [2024-07-15 18:39:15.949966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:26.696 [2024-07-15 18:39:15.949984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:53680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.696 [2024-07-15 18:39:15.949997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:26.696 [2024-07-15 18:39:15.950014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:53688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.696 [2024-07-15 18:39:15.950027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:26.696 [2024-07-15 18:39:15.950045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:53696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.696 [2024-07-15 18:39:15.950058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:20:26.696 [2024-07-15 18:39:15.950076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:53704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.696 [2024-07-15 18:39:15.950088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:20:26.696 [2024-07-15 18:39:15.950105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:53712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.696 [2024-07-15 18:39:15.950118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:26.696 [2024-07-15 18:39:15.950136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:53720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.696 [2024-07-15 18:39:15.950149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:20:26.696 [2024-07-15 18:39:15.950166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:53728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.696 [2024-07-15 18:39:15.950179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:26.696 [2024-07-15 18:39:15.950196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:53736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.696 [2024-07-15 18:39:15.950209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:26.696 [2024-07-15 18:39:15.950226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:53744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.696 [2024-07-15 18:39:15.950243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:26.696 [2024-07-15 18:39:15.950260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:53752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.696 [2024-07-15 18:39:15.950273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:26.696 [2024-07-15 18:39:15.950290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:53760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.696 [2024-07-15 18:39:15.950303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:20:26.696 [2024-07-15 18:39:15.950321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:53768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.696 [2024-07-15 18:39:15.950333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:26.696 [2024-07-15 18:39:15.950351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:53776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.696 [2024-07-15 18:39:15.950363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:20:26.696 [2024-07-15 18:39:15.950381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:53784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.696 [2024-07-15 18:39:15.950394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:26.696 [2024-07-15 18:39:15.950411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:53792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.696 [2024-07-15 18:39:15.950424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:26.696 [2024-07-15 18:39:15.950441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:53800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.696 [2024-07-15 18:39:15.950454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:20:26.696 [2024-07-15 18:39:15.950472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:53808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.696 [2024-07-15 18:39:15.950484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:20:26.696 [2024-07-15 18:39:15.950502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:53816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.696 [2024-07-15 18:39:15.950514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:20:26.696 [2024-07-15 18:39:15.950532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:53824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.696 [2024-07-15 18:39:15.950545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:20:26.696 [2024-07-15 18:39:15.950562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:53832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.696 [2024-07-15 18:39:15.950582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:26.696 [2024-07-15 18:39:15.950600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:53840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.696 [2024-07-15 18:39:15.950614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:26.696 [2024-07-15 18:39:15.950634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:53848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.696 [2024-07-15 18:39:15.950647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:26.696 [2024-07-15 18:39:15.950664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:53856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.696 [2024-07-15 18:39:15.950677] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:26.696 [2024-07-15 18:39:15.950695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:53864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.696 [2024-07-15 18:39:15.950707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:26.696 [2024-07-15 18:39:15.950725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:53872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.696 [2024-07-15 18:39:15.950738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:26.696 [2024-07-15 18:39:15.951248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:53880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.696 [2024-07-15 18:39:15.951270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:26.696 [2024-07-15 18:39:15.951308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:53888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.696 [2024-07-15 18:39:15.951322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:26.696 [2024-07-15 18:39:15.951341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:53896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.696 [2024-07-15 18:39:15.951354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:26.696 [2024-07-15 18:39:15.951373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:53904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.696 [2024-07-15 18:39:15.951387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:20:26.696 [2024-07-15 18:39:15.951405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:53912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.696 [2024-07-15 18:39:15.951419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:20:26.696 [2024-07-15 18:39:15.951437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:53920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.696 [2024-07-15 18:39:15.951451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:26.696 [2024-07-15 18:39:15.951470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:53928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.696 [2024-07-15 18:39:15.951483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:26.696 [2024-07-15 18:39:15.951502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:53936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:20:26.696 [2024-07-15 18:39:15.951515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:26.696 [2024-07-15 18:39:15.951543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:53944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.696 [2024-07-15 18:39:15.951557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:26.696 [2024-07-15 18:39:15.951576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:53952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.696 [2024-07-15 18:39:15.951602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:26.696 [2024-07-15 18:39:15.951621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:53960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.696 [2024-07-15 18:39:15.951634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:26.696 [2024-07-15 18:39:15.951653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:53968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.696 [2024-07-15 18:39:15.951666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:26.697 [2024-07-15 18:39:15.951685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:53976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.697 [2024-07-15 18:39:15.951698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:26.697 [2024-07-15 18:39:15.951717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:53984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.697 [2024-07-15 18:39:15.951730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:26.697 [2024-07-15 18:39:15.951749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:53992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.697 [2024-07-15 18:39:15.951762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:26.697 [2024-07-15 18:39:15.951781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:54000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.697 [2024-07-15 18:39:15.951795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:26.697 [2024-07-15 18:39:15.951813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:54008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.697 [2024-07-15 18:39:15.951826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:26.697 [2024-07-15 18:39:15.951845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 
lba:54016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.697 [2024-07-15 18:39:15.951858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:26.697 [2024-07-15 18:39:15.951877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:54024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.697 [2024-07-15 18:39:15.951890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:26.697 [2024-07-15 18:39:15.951909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:54032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.697 [2024-07-15 18:39:15.951923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:26.697 [2024-07-15 18:39:15.951947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:54040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.697 [2024-07-15 18:39:15.951960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:26.697 [2024-07-15 18:39:15.951979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:54048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.697 [2024-07-15 18:39:15.951992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:26.697 [2024-07-15 18:39:15.952011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:54056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.697 [2024-07-15 18:39:15.952025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:26.697 [2024-07-15 18:39:15.952043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:54064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.697 [2024-07-15 18:39:15.952056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:26.697 [2024-07-15 18:39:15.952075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:54072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.697 [2024-07-15 18:39:15.952088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:26.697 [2024-07-15 18:39:15.952107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:54080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.697 [2024-07-15 18:39:15.952120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:26.697 [2024-07-15 18:39:15.952139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:54088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.697 [2024-07-15 18:39:15.952152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:26.697 [2024-07-15 18:39:15.952171] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:54096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.697 [2024-07-15 18:39:15.952185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:26.697 [2024-07-15 18:39:15.952204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:54104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.697 [2024-07-15 18:39:15.952217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:26.697 [2024-07-15 18:39:15.952235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:54112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.697 [2024-07-15 18:39:15.952249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:26.697 [2024-07-15 18:39:15.952268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:54120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.697 [2024-07-15 18:39:15.952281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:26.697 [2024-07-15 18:39:15.952299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:54128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.697 [2024-07-15 18:39:15.952313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:26.697 [2024-07-15 18:39:15.952344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:54136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.697 [2024-07-15 18:39:15.952360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:26.697 [2024-07-15 18:39:15.952378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:54144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.697 [2024-07-15 18:39:15.952391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:26.697 [2024-07-15 18:39:15.952409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:54152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.697 [2024-07-15 18:39:15.952422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:26.697 [2024-07-15 18:39:15.952439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:54160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.697 [2024-07-15 18:39:15.952452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:20:26.697 [2024-07-15 18:39:15.952470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:54168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.697 [2024-07-15 18:39:15.952482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004a p:0 m:0 dnr:0 
00:20:26.697 [2024-07-15 18:39:15.952500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:54176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.697 [2024-07-15 18:39:15.952512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:26.697 [2024-07-15 18:39:15.952530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:54184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.697 [2024-07-15 18:39:15.952542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:26.697 [2024-07-15 18:39:15.952560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:54192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.697 [2024-07-15 18:39:15.952572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:26.697 [2024-07-15 18:39:15.952597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:54200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.697 [2024-07-15 18:39:15.952609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:26.697 [2024-07-15 18:39:15.952627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:54208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.697 [2024-07-15 18:39:15.952640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:26.697 [2024-07-15 18:39:15.952657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:54216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.697 [2024-07-15 18:39:15.952670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:26.697 [2024-07-15 18:39:15.952687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:54224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.697 [2024-07-15 18:39:15.952700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:26.697 [2024-07-15 18:39:15.952718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:54232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.697 [2024-07-15 18:39:15.952735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:26.697 [2024-07-15 18:39:15.952753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:54240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.697 [2024-07-15 18:39:15.952766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:26.697 [2024-07-15 18:39:15.952783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:53224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.697 [2024-07-15 18:39:15.952796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:89 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
[... repeated nvme_qpair.c 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion *NOTICE* output (00:20:26.697-00:20:26.703): WRITE (lba 53304-53872) and READ (lba 53232-53296) I/O on sqid:1 at 2024-07-15 18:39:15 completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02); WRITE (lba 80000-80448) and READ (lba 79432-79992) I/O at 2024-07-15 18:39:29 completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) and ABORTED - SQ DELETION (00/08) ...]
00:20:26.703 [2024-07-15 18:39:29.014020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:79992
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.703 [2024-07-15 18:39:29.014033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:26.703 [2024-07-15 18:39:29.014105] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x204c500 was disconnected and freed. reset controller. 00:20:26.703 [2024-07-15 18:39:29.014963] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:26.703 [2024-07-15 18:39:29.015035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.703 [2024-07-15 18:39:29.015053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:26.703 [2024-07-15 18:39:29.015084] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22184d0 (9): Bad file descriptor 00:20:26.703 [2024-07-15 18:39:29.015398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:26.703 [2024-07-15 18:39:29.015424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22184d0 with addr=10.0.0.2, port=4421 00:20:26.703 [2024-07-15 18:39:29.015439] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22184d0 is same with the state(5) to be set 00:20:26.703 [2024-07-15 18:39:29.015545] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22184d0 (9): Bad file descriptor 00:20:26.703 [2024-07-15 18:39:29.015665] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:26.703 [2024-07-15 18:39:29.015682] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:26.703 [2024-07-15 18:39:29.015696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:26.703 [2024-07-15 18:39:29.015795] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:26.703 [2024-07-15 18:39:29.015809] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:26.703 [2024-07-15 18:39:39.065835] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
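The wall of "ABORTED - SQ DELETION" completions above is the expected side effect of a path switch: bdev_nvme resets the controller, the old submission queue is deleted with verify I/O still outstanding, every queued WRITE/READ completes as aborted, and the initiator then reconnects to the alternate listener on 10.0.0.2:4421. The exact failover logic lives in test/nvmf/host/multipath.sh; the following is only a hedged sketch, reusing the rpc.py calls, subsystem name and ports visible in this log, of the kind of listener flip that produces it:
  # Hypothetical reproduction of the path flip (names/ports copied from the log above).
  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  $rpc_py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # In-flight commands on the torn-down qpair complete as ABORTED - SQ DELETION, and
  # bdev_nvme reconnects the controller to port 4421, matching the reset/reconnect notices above.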
00:20:26.703 Received shutdown signal, test time was about 54.558099 seconds
00:20:26.703
00:20:26.703 Latency(us)
00:20:26.703 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:26.703 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:20:26.703 Verification LBA range: start 0x0 length 0x4000
00:20:26.703 Nvme0n1 : 54.56 9641.74 37.66 0.00 0.00 13256.83 122.55 7115156.67
00:20:26.703 ===================================================================================================================
00:20:26.703 Total : 9641.74 37.66 0.00 0.00 13256.83 122.55 7115156.67
00:20:26.703 18:39:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:26.962 18:39:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:20:26.962 18:39:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:26.962 18:39:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:20:26.962 18:39:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:26.962 18:39:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@117 -- # sync 00:20:26.962 18:39:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:26.962 18:39:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@120 -- # set +e 00:20:26.962 18:39:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:26.962 18:39:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:26.962 rmmod nvme_tcp 00:20:26.962 rmmod nvme_fabrics 00:20:26.962 rmmod nvme_keyring 00:20:26.962 18:39:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:26.962 18:39:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@124 -- # set -e 00:20:26.962 18:39:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@125 -- # return 0 00:20:26.962 18:39:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@489 -- # '[' -n 94257 ']' 00:20:26.962 18:39:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@490 -- # killprocess 94257 00:20:26.962 18:39:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 94257 ']' 00:20:26.962 18:39:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 94257 00:20:26.962 18:39:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname 00:20:26.962 18:39:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:26.962 18:39:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 94257 00:20:26.962 18:39:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:26.962 18:39:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:26.962 killing process with pid 94257 00:20:26.962 18:39:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 94257' 00:20:26.962 18:39:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 94257 00:20:26.962 18:39:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 94257 00:20:27.221 18:39:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:27.221 18:39:49 nvmf_tcp.nvmf_host_multipath -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:27.221 18:39:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:27.221 18:39:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:27.221 18:39:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:27.221 18:39:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:27.221 18:39:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:27.221 18:39:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:27.221 18:39:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:27.221 00:20:27.221 real 1m0.108s 00:20:27.221 user 2m46.407s 00:20:27.221 sys 0m16.961s 00:20:27.221 18:39:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:27.221 18:39:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:27.221 ************************************ 00:20:27.221 END TEST nvmf_host_multipath 00:20:27.221 ************************************ 00:20:27.479 18:39:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:27.479 18:39:49 nvmf_tcp -- nvmf/nvmf.sh@118 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:20:27.479 18:39:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:27.479 18:39:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:27.479 18:39:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:27.479 ************************************ 00:20:27.479 START TEST nvmf_timeout 00:20:27.479 ************************************ 00:20:27.479 18:39:49 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:20:27.479 * Looking for test storage... 
00:20:27.479 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:27.479 18:39:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:27.479 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:20:27.479 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:27.479 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:27.479 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:27.479 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:27.479 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:27.479 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:27.479 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:27.479 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:27.479 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:27.479 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:27.479 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:20:27.479 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=ee8aff67-4252-4979-91cf-1a72f40d57b6 00:20:27.479 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:27.479 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:27.479 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:27.479 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:27.479 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:27.479 18:39:50 nvmf_tcp.nvmf_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:27.479 18:39:50 nvmf_tcp.nvmf_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:27.479 18:39:50 nvmf_tcp.nvmf_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:27.479 18:39:50 nvmf_tcp.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.479 18:39:50 nvmf_tcp.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.479 
18:39:50 nvmf_tcp.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.479 18:39:50 nvmf_tcp.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:20:27.479 18:39:50 nvmf_tcp.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.479 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@47 -- # : 0 00:20:27.479 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:27.479 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:27.479 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:27.479 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:27.479 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:27.479 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:27.479 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:27.480 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:27.480 18:39:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:27.480 18:39:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:27.480 18:39:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:27.480 18:39:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:20:27.480 18:39:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:27.480 18:39:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:20:27.480 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:27.480 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:27.480 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:27.480 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:27.480 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:27.480 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:27.480 18:39:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:27.480 18:39:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:27.480 18:39:50 
nvmf_tcp.nvmf_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:27.480 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:27.480 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:27.480 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:27.480 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:27.480 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:27.480 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:27.480 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:27.480 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:27.480 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:27.480 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:27.480 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:27.480 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:27.480 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:27.480 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:27.480 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:27.480 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:27.480 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:27.480 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:27.480 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:27.762 Cannot find device "nvmf_tgt_br" 00:20:27.762 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # true 00:20:27.762 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:27.762 Cannot find device "nvmf_tgt_br2" 00:20:27.762 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # true 00:20:27.762 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:27.762 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:27.762 Cannot find device "nvmf_tgt_br" 00:20:27.762 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # true 00:20:27.762 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:27.762 Cannot find device "nvmf_tgt_br2" 00:20:27.762 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # true 00:20:27.762 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:27.762 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:27.762 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:27.762 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:27.762 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:20:27.762 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:27.762 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:27.762 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:20:27.762 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:27.762 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:27.762 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:27.762 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:27.762 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:27.762 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:27.762 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:27.762 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:27.762 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:27.762 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:27.762 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:27.762 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:27.762 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:27.762 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:27.762 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:27.762 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:27.762 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:27.762 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:28.021 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:28.021 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:28.021 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:28.021 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:28.021 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:28.021 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:28.021 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:28.021 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:20:28.021 00:20:28.021 --- 10.0.0.2 ping statistics --- 00:20:28.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:28.021 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:20:28.021 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:28.021 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:20:28.021 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:20:28.021 00:20:28.021 --- 10.0.0.3 ping statistics --- 00:20:28.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:28.021 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:20:28.021 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:28.021 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:28.021 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:20:28.021 00:20:28.021 --- 10.0.0.1 ping statistics --- 00:20:28.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:28.021 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:20:28.021 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:28.021 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@433 -- # return 0 00:20:28.021 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:28.021 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:28.021 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:28.021 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:28.021 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:28.021 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:28.021 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:28.021 18:39:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:20:28.021 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:28.021 18:39:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:28.021 18:39:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:28.021 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@481 -- # nvmfpid=95612 00:20:28.021 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@482 -- # waitforlisten 95612 00:20:28.021 18:39:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 95612 ']' 00:20:28.021 18:39:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:28.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:28.021 18:39:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:28.021 18:39:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:28.021 18:39:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:28.021 18:39:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:28.021 18:39:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:20:28.021 [2024-07-15 18:39:50.522969] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 
00:20:28.021 [2024-07-15 18:39:50.523039] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:28.278 [2024-07-15 18:39:50.665646] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:28.278 [2024-07-15 18:39:50.747734] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:28.278 [2024-07-15 18:39:50.747780] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:28.278 [2024-07-15 18:39:50.747790] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:28.278 [2024-07-15 18:39:50.747798] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:28.278 [2024-07-15 18:39:50.747805] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:28.278 [2024-07-15 18:39:50.748019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:28.278 [2024-07-15 18:39:50.748019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:28.845 18:39:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:28.845 18:39:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:20:28.845 18:39:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:28.845 18:39:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:28.845 18:39:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:28.845 18:39:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:28.845 18:39:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:28.845 18:39:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:29.104 [2024-07-15 18:39:51.611374] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:29.104 18:39:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:20:29.361 Malloc0 00:20:29.361 18:39:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:29.619 18:39:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:29.877 18:39:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:29.877 [2024-07-15 18:39:52.432779] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:29.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
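Condensed, the target-side bring-up that the trace above just performed is a short RPC sequence against the nvmf_tgt started inside nvmf_tgt_ns_spdk; a sketch with every value copied from the log (the wrapper functions in nvmf/common.sh add the retries and error handling that are omitted here):
  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # TCP transport, with the same options the test passes (-o and -u 8192 copied verbatim).
  $rpc_py nvmf_create_transport -t tcp -o -u 8192
  # 64 MB malloc bdev with 512-byte blocks (MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE above).
  $rpc_py bdev_malloc_create 64 512 -b Malloc0
  # Subsystem cnode1, namespace backed by Malloc0, listener on 10.0.0.2:4420.
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420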
00:20:29.877 18:39:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=95702 00:20:29.877 18:39:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:20:29.877 18:39:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 95702 /var/tmp/bdevperf.sock 00:20:29.877 18:39:52 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 95702 ']' 00:20:29.877 18:39:52 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:29.877 18:39:52 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:29.877 18:39:52 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:29.877 18:39:52 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:29.877 18:39:52 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:30.137 [2024-07-15 18:39:52.499816] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:20:30.137 [2024-07-15 18:39:52.499885] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95702 ] 00:20:30.137 [2024-07-15 18:39:52.641792] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:30.137 [2024-07-15 18:39:52.730065] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:31.072 18:39:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:31.072 18:39:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:20:31.072 18:39:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:20:31.072 18:39:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:20:31.329 NVMe0n1 00:20:31.329 18:39:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:31.329 18:39:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=95745 00:20:31.329 18:39:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:20:31.586 Running I/O for 10 seconds... 
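The initiator side is equally compact: bdevperf is started with its own RPC socket, bdev_nvme options are set (-r -1, exactly as the script passes it), and the controller is attached with the short loss/reconnect timeouts this test exists to exercise. A sketch built only from the commands already traced above (the wait for the RPC socket between steps is left out):
  spdk=/home/vagrant/spdk_repo/spdk
  $spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
  $spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  $spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
  $spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
  # Once the 4420 listener is removed (the very next step in the log), I/O to NVMe0n1 is
  # retried while bdev_nvme reconnects every 2 s, for at most 5 s of controller loss.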
00:20:32.522 18:39:54 nvmf_tcp.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:32.522 [2024-07-15 18:39:55.032595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:103328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.522 [2024-07-15 18:39:55.032640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.522 [2024-07-15 18:39:55.032659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:102632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.522 [2024-07-15 18:39:55.032670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.522 [2024-07-15 18:39:55.032680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:102640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.522 [2024-07-15 18:39:55.032690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.522 [2024-07-15 18:39:55.032700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:102648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.522 [2024-07-15 18:39:55.032709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.522 [2024-07-15 18:39:55.032719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:102656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.522 [2024-07-15 18:39:55.032728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.522 [2024-07-15 18:39:55.032739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:102664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.522 [2024-07-15 18:39:55.032747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.522 [2024-07-15 18:39:55.032757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:102672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.522 [2024-07-15 18:39:55.032766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.522 [2024-07-15 18:39:55.032776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:102680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.522 [2024-07-15 18:39:55.032784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.522 [2024-07-15 18:39:55.032794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:102688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.522 [2024-07-15 18:39:55.032802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.522 [2024-07-15 18:39:55.032812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:102696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:32.522 [2024-07-15 18:39:55.032821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.522 [2024-07-15 18:39:55.032831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:102704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.522 [2024-07-15 18:39:55.032839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.522 [2024-07-15 18:39:55.032849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:102712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.522 [2024-07-15 18:39:55.032858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.522 [2024-07-15 18:39:55.032868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:102720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.522 [2024-07-15 18:39:55.032876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.522 [2024-07-15 18:39:55.032886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:102728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.522 [2024-07-15 18:39:55.032894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.522 [2024-07-15 18:39:55.032904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:102736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.522 [2024-07-15 18:39:55.032912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.522 [2024-07-15 18:39:55.032928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:102744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.522 [2024-07-15 18:39:55.032941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.522 [2024-07-15 18:39:55.032953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:102752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.522 [2024-07-15 18:39:55.032969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.522 [2024-07-15 18:39:55.032989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:102760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.522 [2024-07-15 18:39:55.033004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.522 [2024-07-15 18:39:55.033022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:102768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.522 [2024-07-15 18:39:55.033037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.522 [2024-07-15 18:39:55.033050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:102776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.522 [2024-07-15 
18:39:55.033058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.522 [2024-07-15 18:39:55.033069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:102784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.522 [2024-07-15 18:39:55.033077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.522 [2024-07-15 18:39:55.033091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:102792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.522 [2024-07-15 18:39:55.033102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.522 [2024-07-15 18:39:55.033119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:102800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.522 [2024-07-15 18:39:55.033136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.522 [2024-07-15 18:39:55.033155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:102808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.522 [2024-07-15 18:39:55.033170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.522 [2024-07-15 18:39:55.033187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:102816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.522 [2024-07-15 18:39:55.033196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.522 [2024-07-15 18:39:55.033206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:102824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.522 [2024-07-15 18:39:55.033214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.522 [2024-07-15 18:39:55.033224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:102832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.522 [2024-07-15 18:39:55.033233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.522 [2024-07-15 18:39:55.033243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:102840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.522 [2024-07-15 18:39:55.033251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.522 [2024-07-15 18:39:55.033262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:102848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.522 [2024-07-15 18:39:55.033276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.522 [2024-07-15 18:39:55.033291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:102856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.522 [2024-07-15 18:39:55.033302] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.522 [2024-07-15 18:39:55.033320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:102864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.522 [2024-07-15 18:39:55.033336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.522 [2024-07-15 18:39:55.033354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:102872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.522 [2024-07-15 18:39:55.033370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.522 [2024-07-15 18:39:55.033386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:102880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.522 [2024-07-15 18:39:55.033401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.522 [2024-07-15 18:39:55.033418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:103336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.522 [2024-07-15 18:39:55.033434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.522 [2024-07-15 18:39:55.033450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:103344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.522 [2024-07-15 18:39:55.033465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.522 [2024-07-15 18:39:55.033481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:103352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.522 [2024-07-15 18:39:55.033494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.522 [2024-07-15 18:39:55.033507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:103360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.522 [2024-07-15 18:39:55.033518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.522 [2024-07-15 18:39:55.033530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:103368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.523 [2024-07-15 18:39:55.033541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.523 [2024-07-15 18:39:55.033558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:103376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.523 [2024-07-15 18:39:55.033589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.523 [2024-07-15 18:39:55.033607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:103384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.523 [2024-07-15 18:39:55.033622] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.523 [2024-07-15 18:39:55.033639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:103392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.523 [2024-07-15 18:39:55.033653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.523 [2024-07-15 18:39:55.033671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:103400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.523 [2024-07-15 18:39:55.033683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.523 [2024-07-15 18:39:55.033695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:103408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.523 [2024-07-15 18:39:55.033706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.523 [2024-07-15 18:39:55.033722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:103416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.523 [2024-07-15 18:39:55.033737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.523 [2024-07-15 18:39:55.033755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:103424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.523 [2024-07-15 18:39:55.033771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.523 [2024-07-15 18:39:55.033788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:103432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.523 [2024-07-15 18:39:55.033804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.523 [2024-07-15 18:39:55.033817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:103440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.523 [2024-07-15 18:39:55.033829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.523 [2024-07-15 18:39:55.033842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:103448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.523 [2024-07-15 18:39:55.033853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.523 [2024-07-15 18:39:55.033865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:103456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.523 [2024-07-15 18:39:55.033876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.523 [2024-07-15 18:39:55.033889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:103464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.523 [2024-07-15 18:39:55.033904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.523 [2024-07-15 18:39:55.033921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:103472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.523 [2024-07-15 18:39:55.033936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.523 [2024-07-15 18:39:55.033950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:103480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.523 [2024-07-15 18:39:55.033962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.523 [2024-07-15 18:39:55.033980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:103488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.523 [2024-07-15 18:39:55.033995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.523 [2024-07-15 18:39:55.034012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:103496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.523 [2024-07-15 18:39:55.034029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.523 [2024-07-15 18:39:55.034047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:103504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.523 [2024-07-15 18:39:55.034058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.523 [2024-07-15 18:39:55.034070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:103512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.523 [2024-07-15 18:39:55.034083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.523 [2024-07-15 18:39:55.034101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:103520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.523 [2024-07-15 18:39:55.034117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.523 [2024-07-15 18:39:55.034134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:103528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.523 [2024-07-15 18:39:55.034149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.523 [2024-07-15 18:39:55.034166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:103536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.523 [2024-07-15 18:39:55.034182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.523 [2024-07-15 18:39:55.034198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:103544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.523 [2024-07-15 18:39:55.034210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:32.523 [2024-07-15 18:39:55.034222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:103552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.523 [2024-07-15 18:39:55.034233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.523 [2024-07-15 18:39:55.034247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:103560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.523 [2024-07-15 18:39:55.034263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.523 [2024-07-15 18:39:55.034281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:103568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.523 [2024-07-15 18:39:55.034297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.523 [2024-07-15 18:39:55.034316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:103576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.523 [2024-07-15 18:39:55.034331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.523 [2024-07-15 18:39:55.034344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:103584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.523 [2024-07-15 18:39:55.034355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.523 [2024-07-15 18:39:55.034368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:103592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.523 [2024-07-15 18:39:55.034384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.523 [2024-07-15 18:39:55.034400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:103600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.523 [2024-07-15 18:39:55.034416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.523 [2024-07-15 18:39:55.034434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:103608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.523 [2024-07-15 18:39:55.034448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.523 [2024-07-15 18:39:55.034465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:103616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.523 [2024-07-15 18:39:55.034476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.523 [2024-07-15 18:39:55.034488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:103624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.523 [2024-07-15 18:39:55.034499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.523 [2024-07-15 18:39:55.034516] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:103632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.523 [2024-07-15 18:39:55.034533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.523 [2024-07-15 18:39:55.034551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:103640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.523 [2024-07-15 18:39:55.034577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.523 [2024-07-15 18:39:55.034596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:102888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.523 [2024-07-15 18:39:55.034612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.523 [2024-07-15 18:39:55.034630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:102896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.523 [2024-07-15 18:39:55.034646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.523 [2024-07-15 18:39:55.034664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:102904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.523 [2024-07-15 18:39:55.034679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.523 [2024-07-15 18:39:55.034692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:102912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.523 [2024-07-15 18:39:55.034704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.523 [2024-07-15 18:39:55.034719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:102920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.523 [2024-07-15 18:39:55.034734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.523 [2024-07-15 18:39:55.034750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:102928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.523 [2024-07-15 18:39:55.034765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.523 [2024-07-15 18:39:55.034782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:102936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.523 [2024-07-15 18:39:55.034798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.523 [2024-07-15 18:39:55.034812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:102944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.524 [2024-07-15 18:39:55.034824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.524 [2024-07-15 18:39:55.034844] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:102952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.524 [2024-07-15 18:39:55.034860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.524 [2024-07-15 18:39:55.034877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:102960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.524 [2024-07-15 18:39:55.034893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.524 [2024-07-15 18:39:55.034909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:102968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.524 [2024-07-15 18:39:55.034920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.524 [2024-07-15 18:39:55.034932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:102976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.524 [2024-07-15 18:39:55.034943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.524 [2024-07-15 18:39:55.034959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:102984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.524 [2024-07-15 18:39:55.034974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.524 [2024-07-15 18:39:55.034993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:102992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.524 [2024-07-15 18:39:55.035009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.524 [2024-07-15 18:39:55.035027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:103000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.524 [2024-07-15 18:39:55.035041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.524 [2024-07-15 18:39:55.035056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:103008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.524 [2024-07-15 18:39:55.035067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.524 [2024-07-15 18:39:55.035083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:103016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.524 [2024-07-15 18:39:55.035098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.524 [2024-07-15 18:39:55.035116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:103024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.524 [2024-07-15 18:39:55.035133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.524 [2024-07-15 18:39:55.035151] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:49 nsid:1 lba:103032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.524 [2024-07-15 18:39:55.035163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.524 [2024-07-15 18:39:55.035176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:103040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.524 [2024-07-15 18:39:55.035186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.524 [2024-07-15 18:39:55.035199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:103048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.524 [2024-07-15 18:39:55.035212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.524 [2024-07-15 18:39:55.035228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:103056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.524 [2024-07-15 18:39:55.035258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.524 [2024-07-15 18:39:55.035277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:103064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.524 [2024-07-15 18:39:55.035293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.524 [2024-07-15 18:39:55.035309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:103072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.524 [2024-07-15 18:39:55.035328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.524 [2024-07-15 18:39:55.035344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:103080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.524 [2024-07-15 18:39:55.035358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.524 [2024-07-15 18:39:55.035375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:103088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.524 [2024-07-15 18:39:55.035386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.524 [2024-07-15 18:39:55.035399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:103096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.524 [2024-07-15 18:39:55.035410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.524 [2024-07-15 18:39:55.035423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:103104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.524 [2024-07-15 18:39:55.035438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.524 [2024-07-15 18:39:55.035456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 
nsid:1 lba:103112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.524 [2024-07-15 18:39:55.035473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.524 [2024-07-15 18:39:55.035490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:103120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.524 [2024-07-15 18:39:55.035506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.524 [2024-07-15 18:39:55.035519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:103128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.524 [2024-07-15 18:39:55.035530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.524 [2024-07-15 18:39:55.035543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:103136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.524 [2024-07-15 18:39:55.035558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.524 [2024-07-15 18:39:55.035588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:103144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.524 [2024-07-15 18:39:55.035604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.524 [2024-07-15 18:39:55.035621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:103152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.524 [2024-07-15 18:39:55.035636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.524 [2024-07-15 18:39:55.035655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:103160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.524 [2024-07-15 18:39:55.035670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.524 [2024-07-15 18:39:55.035685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:103168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.524 [2024-07-15 18:39:55.035696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.524 [2024-07-15 18:39:55.035708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:103176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.524 [2024-07-15 18:39:55.035719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.524 [2024-07-15 18:39:55.035733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:103184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.524 [2024-07-15 18:39:55.035751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.524 [2024-07-15 18:39:55.035771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:103192 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.524 [2024-07-15 18:39:55.035786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.524 [2024-07-15 18:39:55.035803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:103648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.524 [2024-07-15 18:39:55.035817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.524 [2024-07-15 18:39:55.035830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:103200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.524 [2024-07-15 18:39:55.035841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.524 [2024-07-15 18:39:55.035854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:103208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.524 [2024-07-15 18:39:55.035865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.524 [2024-07-15 18:39:55.035877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:103216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.524 [2024-07-15 18:39:55.035893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.524 [2024-07-15 18:39:55.035909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:103224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.524 [2024-07-15 18:39:55.035925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.524 [2024-07-15 18:39:55.035942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:103232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.524 [2024-07-15 18:39:55.035958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.524 [2024-07-15 18:39:55.035975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:103240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.524 [2024-07-15 18:39:55.035986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.524 [2024-07-15 18:39:55.035999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:103248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.524 [2024-07-15 18:39:55.036010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.524 [2024-07-15 18:39:55.036022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:103256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.524 [2024-07-15 18:39:55.036036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.524 [2024-07-15 18:39:55.036054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:103264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:32.524 [2024-07-15 18:39:55.036070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.524 [2024-07-15 18:39:55.036089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:103272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.524 [2024-07-15 18:39:55.036104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.525 [2024-07-15 18:39:55.036119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:103280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.525 [2024-07-15 18:39:55.036130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.525 [2024-07-15 18:39:55.036142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:103288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.525 [2024-07-15 18:39:55.036153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.525 [2024-07-15 18:39:55.036168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:103296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.525 [2024-07-15 18:39:55.036182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.525 [2024-07-15 18:39:55.036200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:103304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.525 [2024-07-15 18:39:55.036219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.525 [2024-07-15 18:39:55.036237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:103312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.525 [2024-07-15 18:39:55.036253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.525 [2024-07-15 18:39:55.036268] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf978d0 is same with the state(5) to be set 00:20:32.525 [2024-07-15 18:39:55.036287] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:32.525 [2024-07-15 18:39:55.036298] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:32.525 [2024-07-15 18:39:55.036312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:103320 len:8 PRP1 0x0 PRP2 0x0 00:20:32.525 [2024-07-15 18:39:55.036325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.525 [2024-07-15 18:39:55.036378] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xf978d0 was disconnected and freed. reset controller. 
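The flood of records above is the expected pattern when an I/O qpair is torn down: every READ and WRITE still outstanding on qpair 1 is completed with ABORTED - SQ DELETION (00/08), after which bdev_nvme frees the disconnected qpair (0xf978d0) and resets the controller. Purely as an illustration of how one might quantify that flood from a saved console log, the short Python sketch below tallies the printed commands and the aborted completions; the log path and the field layout are assumptions taken from the lines shown here, and the script is not part of the SPDK test suite.

#!/usr/bin/env python3
"""Illustrative helper (not from the SPDK repo): tally the aborted I/O records
printed above from a saved console log. 'console.log' is a placeholder path."""
import re
from collections import Counter

# Shape of the command lines above, e.g.
#   nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:103472 len:8
CMD_RE = re.compile(r"nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) sqid:\d+ cid:\d+ nsid:\d+ lba:\d+")
# Shape of the matching completions, e.g.
#   ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000
ABORT_RE = re.compile(r"ABORTED - SQ DELETION \(00/08\)")

def main(path: str = "console.log") -> None:
    with open(path, encoding="utf-8", errors="replace") as fh:
        text = fh.read()
    # Count how many READ vs WRITE commands were printed, and how many
    # completions carried the ABORTED - SQ DELETION status.
    opcodes = Counter(m.group(1) for m in CMD_RE.finditer(text))
    print("printed commands   :", dict(opcodes))
    print("aborted completions:", len(ABORT_RE.findall(text)))

if __name__ == "__main__":
    main()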
00:20:32.525 [2024-07-15 18:39:55.036636] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:32.525 [2024-07-15 18:39:55.036711] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2a240 (9): Bad file descriptor 00:20:32.525 [2024-07-15 18:39:55.036793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:32.525 [2024-07-15 18:39:55.036809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2a240 with addr=10.0.0.2, port=4420 00:20:32.525 [2024-07-15 18:39:55.036820] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2a240 is same with the state(5) to be set 00:20:32.525 [2024-07-15 18:39:55.036840] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2a240 (9): Bad file descriptor 00:20:32.525 [2024-07-15 18:39:55.036860] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:32.525 [2024-07-15 18:39:55.036875] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:32.525 [2024-07-15 18:39:55.036892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:32.525 [2024-07-15 18:39:55.036916] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:32.525 [2024-07-15 18:39:55.036927] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:32.525 18:39:55 nvmf_tcp.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:20:34.433 [2024-07-15 18:39:57.033848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:34.433 [2024-07-15 18:39:57.033896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2a240 with addr=10.0.0.2, port=4420 00:20:34.433 [2024-07-15 18:39:57.033909] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2a240 is same with the state(5) to be set 00:20:34.433 [2024-07-15 18:39:57.033930] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2a240 (9): Bad file descriptor 00:20:34.433 [2024-07-15 18:39:57.033953] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:34.433 [2024-07-15 18:39:57.033962] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:34.433 [2024-07-15 18:39:57.033972] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:34.433 [2024-07-15 18:39:57.033994] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:34.433 [2024-07-15 18:39:57.034004] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:34.719 18:39:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:20:34.719 18:39:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:20:34.719 18:39:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:34.719 18:39:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:20:34.719 18:39:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:20:34.719 18:39:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:20:34.719 18:39:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:20:34.977 18:39:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:20:34.977 18:39:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:20:36.879 [2024-07-15 18:39:59.030986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:36.879 [2024-07-15 18:39:59.031040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2a240 with addr=10.0.0.2, port=4420 00:20:36.879 [2024-07-15 18:39:59.031054] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2a240 is same with the state(5) to be set 00:20:36.879 [2024-07-15 18:39:59.031077] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2a240 (9): Bad file descriptor 00:20:36.879 [2024-07-15 18:39:59.031093] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:36.879 [2024-07-15 18:39:59.031103] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:36.879 [2024-07-15 18:39:59.031114] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:36.879 [2024-07-15 18:39:59.031137] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:36.879 [2024-07-15 18:39:59.031146] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.781 [2024-07-15 18:40:01.028032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.781 [2024-07-15 18:40:01.028090] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.781 [2024-07-15 18:40:01.028101] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.781 [2024-07-15 18:40:01.028110] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:20:38.781 [2024-07-15 18:40:01.028133] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:39.714 00:20:39.714 Latency(us) 00:20:39.714 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:39.714 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:39.714 Verification LBA range: start 0x0 length 0x4000 00:20:39.714 NVMe0n1 : 8.10 1584.70 6.19 15.81 0.00 80161.72 1671.30 7061253.96 00:20:39.714 =================================================================================================================== 00:20:39.714 Total : 1584.70 6.19 15.81 0.00 80161.72 1671.30 7061253.96 00:20:39.714 0 00:20:39.973 18:40:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:20:39.973 18:40:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:39.973 18:40:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:20:40.232 18:40:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:20:40.232 18:40:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:20:40.232 18:40:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:20:40.232 18:40:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:20:40.490 18:40:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:20:40.490 18:40:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@65 -- # wait 95745 00:20:40.490 18:40:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 95702 00:20:40.490 18:40:02 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 95702 ']' 00:20:40.490 18:40:02 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 95702 00:20:40.490 18:40:02 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:20:40.490 18:40:02 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:40.490 18:40:02 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 95702 00:20:40.490 killing process with pid 95702 00:20:40.490 Received shutdown signal, test time was about 9.057170 seconds 00:20:40.490 00:20:40.490 Latency(us) 00:20:40.490 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:40.491 =================================================================================================================== 00:20:40.491 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:40.491 18:40:02 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:40.491 18:40:02 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:40.491 18:40:02 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 95702' 00:20:40.491 18:40:02 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 95702 00:20:40.491 18:40:02 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 95702 00:20:40.749 18:40:03 nvmf_tcp.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:40.749 [2024-07-15 18:40:03.321245] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:40.749 18:40:03 nvmf_tcp.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=95903 00:20:40.749 18:40:03 nvmf_tcp.nvmf_timeout -- host/timeout.sh@73 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:20:40.749 18:40:03 nvmf_tcp.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 95903 /var/tmp/bdevperf.sock 00:20:40.749 18:40:03 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 95903 ']' 00:20:40.749 18:40:03 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:40.749 18:40:03 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:40.750 18:40:03 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:40.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:40.750 18:40:03 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:40.750 18:40:03 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:41.008 [2024-07-15 18:40:03.392314] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:20:41.008 [2024-07-15 18:40:03.392420] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95903 ] 00:20:41.008 [2024-07-15 18:40:03.525859] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:41.008 [2024-07-15 18:40:03.616888] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:41.944 18:40:04 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:41.944 18:40:04 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:20:41.944 18:40:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:20:41.944 18:40:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:20:42.201 NVMe0n1 00:20:42.201 18:40:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=95945 00:20:42.201 18:40:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:42.201 18:40:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:20:42.458 Running I/O for 10 seconds... 
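To recap the setup traced above before the I/O starts: the TCP listener is re-added on the target (nvmf_subsystem_add_listener), a fresh bdevperf is started with -z -r /var/tmp/bdevperf.sock, bdev_nvme_set_options -r -1 is applied, and the controller is attached with --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 before perform_tests launches the 10-second verify job. As a rough sketch only, the snippet below shows the kind of JSON-RPC request that the rpc.py bdev_nvme_attach_controller call corresponds to; the underscore parameter names follow SPDK's documented RPC schema and are an assumption here, so scripts/rpc.py remains the authoritative way to issue the call.

#!/usr/bin/env python3
"""Sketch of the JSON-RPC request behind the bdev_nvme_attach_controller line above.
Assumes bdevperf is listening on /var/tmp/bdevperf.sock (the -r socket) and that the
parameter names match SPDK's documented bdev_nvme_attach_controller schema."""
import json
import socket

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "bdev_nvme_attach_controller",
    "params": {
        "name": "NVMe0",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "trsvcid": "4420",
        "adrfam": "ipv4",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        # Reconnect knobs exercised by host/timeout.sh
        "ctrlr_loss_timeout_sec": 5,
        "fast_io_fail_timeout_sec": 2,
        "reconnect_delay_sec": 1,
    },
}

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
    sock.connect("/var/tmp/bdevperf.sock")
    sock.sendall(json.dumps(request).encode())
    # Read one chunk of the reply; a real client would keep reading until the JSON parses.
    print(sock.recv(65536).decode())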
00:20:43.396 18:40:05 nvmf_tcp.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:43.396 [2024-07-15 18:40:05.908673] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f3b50 is same with the state(5) to be set 00:20:43.396 [2024-07-15 18:40:05.908718] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f3b50 is same with the state(5) to be set 00:20:43.396 [2024-07-15 18:40:05.908727] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f3b50 is same with the state(5) to be set 00:20:43.396 [2024-07-15 18:40:05.908735] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f3b50 is same with the state(5) to be set 00:20:43.396 [2024-07-15 18:40:05.908744] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f3b50 is same with the state(5) to be set 00:20:43.396 [2024-07-15 18:40:05.908752] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f3b50 is same with the state(5) to be set 00:20:43.396 [2024-07-15 18:40:05.908759] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f3b50 is same with the state(5) to be set 00:20:43.396 [2024-07-15 18:40:05.908767] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f3b50 is same with the state(5) to be set 00:20:43.396 [2024-07-15 18:40:05.908775] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f3b50 is same with the state(5) to be set 00:20:43.396 [2024-07-15 18:40:05.908782] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f3b50 is same with the state(5) to be set 00:20:43.396 [2024-07-15 18:40:05.908790] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f3b50 is same with the state(5) to be set 00:20:43.396 [2024-07-15 18:40:05.908798] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f3b50 is same with the state(5) to be set 00:20:43.396 [2024-07-15 18:40:05.908806] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f3b50 is same with the state(5) to be set 00:20:43.396 [2024-07-15 18:40:05.908813] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f3b50 is same with the state(5) to be set 00:20:43.396 [2024-07-15 18:40:05.908822] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f3b50 is same with the state(5) to be set 00:20:43.396 [2024-07-15 18:40:05.908829] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f3b50 is same with the state(5) to be set 00:20:43.396 [2024-07-15 18:40:05.908837] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f3b50 is same with the state(5) to be set 00:20:43.396 [2024-07-15 18:40:05.908845] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f3b50 is same with the state(5) to be set 00:20:43.396 [2024-07-15 18:40:05.908853] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f3b50 is same with the state(5) to be set 00:20:43.396 [2024-07-15 18:40:05.908860] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f3b50 is same with the state(5) to be set 00:20:43.397 [2024-07-15 18:40:05.908868] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f3b50 is same with the state(5) to be set 00:20:43.397 [2024-07-15 18:40:05.908875] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f3b50 is same with the state(5) to be set 00:20:43.397 [2024-07-15 18:40:05.908883] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f3b50 is same with the state(5) to be set 00:20:43.397 [2024-07-15 18:40:05.908891] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f3b50 is same with the state(5) to be set 00:20:43.397 [2024-07-15 18:40:05.908898] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f3b50 is same with the state(5) to be set 00:20:43.397 [2024-07-15 18:40:05.908906] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f3b50 is same with the state(5) to be set 00:20:43.397 [2024-07-15 18:40:05.908913] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f3b50 is same with the state(5) to be set 00:20:43.397 [2024-07-15 18:40:05.908921] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f3b50 is same with the state(5) to be set 00:20:43.397 [2024-07-15 18:40:05.908929] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f3b50 is same with the state(5) to be set 00:20:43.397 [2024-07-15 18:40:05.908936] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f3b50 is same with the state(5) to be set 00:20:43.397 [2024-07-15 18:40:05.908944] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f3b50 is same with the state(5) to be set 00:20:43.397 [2024-07-15 18:40:05.908952] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f3b50 is same with the state(5) to be set 00:20:43.397 [2024-07-15 18:40:05.908960] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f3b50 is same with the state(5) to be set 00:20:43.397 [2024-07-15 18:40:05.908968] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f3b50 is same with the state(5) to be set 00:20:43.397 [2024-07-15 18:40:05.908976] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f3b50 is same with the state(5) to be set 00:20:43.397 [2024-07-15 18:40:05.908991] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f3b50 is same with the state(5) to be set 00:20:43.397 [2024-07-15 18:40:05.908999] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f3b50 is same with the state(5) to be set 00:20:43.397 [2024-07-15 18:40:05.909007] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f3b50 is same with the state(5) to be set 00:20:43.397 [2024-07-15 18:40:05.909015] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f3b50 is same with the state(5) to be set 00:20:43.397 [2024-07-15 18:40:05.909022] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f3b50 is same with the state(5) to be set 00:20:43.397 [2024-07-15 18:40:05.909030] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f3b50 is same with the state(5) to be set 00:20:43.397 [2024-07-15 18:40:05.909037] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f3b50 is same with the 
state(5) to be set 00:20:43.397 [2024-07-15 18:40:05.909045] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f3b50 is same with the state(5) to be set 00:20:43.397 [2024-07-15 18:40:05.909053] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f3b50 is same with the state(5) to be set 00:20:43.397 [2024-07-15 18:40:05.909060] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f3b50 is same with the state(5) to be set 00:20:43.397 [2024-07-15 18:40:05.909068] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f3b50 is same with the state(5) to be set 00:20:43.397 [2024-07-15 18:40:05.909076] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f3b50 is same with the state(5) to be set 00:20:43.397 [2024-07-15 18:40:05.909083] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f3b50 is same with the state(5) to be set 00:20:43.397 [2024-07-15 18:40:05.909091] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f3b50 is same with the state(5) to be set 00:20:43.397 [2024-07-15 18:40:05.909099] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f3b50 is same with the state(5) to be set 00:20:43.397 [2024-07-15 18:40:05.909107] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f3b50 is same with the state(5) to be set 00:20:43.397 [2024-07-15 18:40:05.909114] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f3b50 is same with the state(5) to be set 00:20:43.397 [2024-07-15 18:40:05.909122] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f3b50 is same with the state(5) to be set 00:20:43.397 [2024-07-15 18:40:05.909129] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f3b50 is same with the state(5) to be set 00:20:43.397 [2024-07-15 18:40:05.909137] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f3b50 is same with the state(5) to be set 00:20:43.397 [2024-07-15 18:40:05.909144] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f3b50 is same with the state(5) to be set 00:20:43.397 [2024-07-15 18:40:05.909153] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f3b50 is same with the state(5) to be set 00:20:43.397 [2024-07-15 18:40:05.909161] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f3b50 is same with the state(5) to be set 00:20:43.397 [2024-07-15 18:40:05.909169] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f3b50 is same with the state(5) to be set 00:20:43.397 [2024-07-15 18:40:05.909176] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f3b50 is same with the state(5) to be set 00:20:43.397 [2024-07-15 18:40:05.909185] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f3b50 is same with the state(5) to be set 00:20:43.397 [2024-07-15 18:40:05.909193] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f3b50 is same with the state(5) to be set 00:20:43.397 [2024-07-15 18:40:05.909200] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f3b50 is same with the state(5) to be set 00:20:43.397 [2024-07-15 18:40:05.909208] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x19f3b50 is same with the state(5) to be set 00:20:43.397 [2024-07-15 18:40:05.909216] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f3b50 is same with the state(5) to be set 00:20:43.397 [2024-07-15 18:40:05.909223] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f3b50 is same with the state(5) to be set 00:20:43.397 [2024-07-15 18:40:05.909237] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f3b50 is same with the state(5) to be set 00:20:43.397 [2024-07-15 18:40:05.909245] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f3b50 is same with the state(5) to be set 00:20:43.397 [2024-07-15 18:40:05.909253] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f3b50 is same with the state(5) to be set 00:20:43.397 [2024-07-15 18:40:05.909260] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f3b50 is same with the state(5) to be set 00:20:43.397 [2024-07-15 18:40:05.909555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:102736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.397 [2024-07-15 18:40:05.909596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.397 [2024-07-15 18:40:05.909625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:102744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.397 [2024-07-15 18:40:05.909641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.397 [2024-07-15 18:40:05.909659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:102752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.397 [2024-07-15 18:40:05.909673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.397 [2024-07-15 18:40:05.909688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:102760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.397 [2024-07-15 18:40:05.909703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.397 [2024-07-15 18:40:05.909720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:102768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.397 [2024-07-15 18:40:05.909734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.397 [2024-07-15 18:40:05.909751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:102776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.397 [2024-07-15 18:40:05.909765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.397 [2024-07-15 18:40:05.909776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:102784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.397 [2024-07-15 18:40:05.909784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.397 [2024-07-15 
18:40:05.909794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:102792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.397 [2024-07-15 18:40:05.909802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.397 [2024-07-15 18:40:05.909812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:102800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.397 [2024-07-15 18:40:05.909827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.397 [2024-07-15 18:40:05.909844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:102808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.397 [2024-07-15 18:40:05.909860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.397 [2024-07-15 18:40:05.909877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:102816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.397 [2024-07-15 18:40:05.909892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.397 [2024-07-15 18:40:05.909908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:102824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.397 [2024-07-15 18:40:05.909922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.397 [2024-07-15 18:40:05.909938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:102832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.397 [2024-07-15 18:40:05.909954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.397 [2024-07-15 18:40:05.909969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:102840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.397 [2024-07-15 18:40:05.909984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.397 [2024-07-15 18:40:05.909998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:102848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.397 [2024-07-15 18:40:05.910009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.397 [2024-07-15 18:40:05.910021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:102856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.397 [2024-07-15 18:40:05.910032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.397 [2024-07-15 18:40:05.910044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:102864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.397 [2024-07-15 18:40:05.910062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.397 [2024-07-15 18:40:05.910075] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:102872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.398 [2024-07-15 18:40:05.910089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.398 [2024-07-15 18:40:05.910106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:102880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.398 [2024-07-15 18:40:05.910121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.398 [2024-07-15 18:40:05.910139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:102888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.398 [2024-07-15 18:40:05.910154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.398 [2024-07-15 18:40:05.910171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:102896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.398 [2024-07-15 18:40:05.910187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.398 [2024-07-15 18:40:05.910202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:102904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.398 [2024-07-15 18:40:05.910214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.398 [2024-07-15 18:40:05.910226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:102912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.398 [2024-07-15 18:40:05.910237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.398 [2024-07-15 18:40:05.910249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:102920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.398 [2024-07-15 18:40:05.910260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.398 [2024-07-15 18:40:05.910275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:102928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.398 [2024-07-15 18:40:05.910290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.398 [2024-07-15 18:40:05.910307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:102936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.398 [2024-07-15 18:40:05.910320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.398 [2024-07-15 18:40:05.910334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:102944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.398 [2024-07-15 18:40:05.910350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.398 [2024-07-15 18:40:05.910367] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:102952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.398 [2024-07-15 18:40:05.910382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.398 [2024-07-15 18:40:05.910400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:102960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.398 [2024-07-15 18:40:05.910412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.398 [2024-07-15 18:40:05.910425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:102968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.398 [2024-07-15 18:40:05.910436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.398 [2024-07-15 18:40:05.910449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:102976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.398 [2024-07-15 18:40:05.910459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.398 [2024-07-15 18:40:05.910476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:102984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.398 [2024-07-15 18:40:05.910490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.398 [2024-07-15 18:40:05.910508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:102992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.398 [2024-07-15 18:40:05.910524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.398 [2024-07-15 18:40:05.910541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:103000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.398 [2024-07-15 18:40:05.910557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.398 [2024-07-15 18:40:05.910591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:103008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.398 [2024-07-15 18:40:05.910606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.398 [2024-07-15 18:40:05.910622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:103016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.398 [2024-07-15 18:40:05.910637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.398 [2024-07-15 18:40:05.910651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:103024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.398 [2024-07-15 18:40:05.910662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.398 [2024-07-15 18:40:05.910674] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:47 nsid:1 lba:103032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.398 [2024-07-15 18:40:05.910685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.398 [2024-07-15 18:40:05.910700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:103104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.398 [2024-07-15 18:40:05.910711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.398 [2024-07-15 18:40:05.910729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:103112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.398 [2024-07-15 18:40:05.910743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.398 [2024-07-15 18:40:05.910761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:103120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.398 [2024-07-15 18:40:05.910776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.398 [2024-07-15 18:40:05.910793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:103128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.398 [2024-07-15 18:40:05.910809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.398 [2024-07-15 18:40:05.910824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:103040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.398 [2024-07-15 18:40:05.910835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.398 [2024-07-15 18:40:05.910848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:103048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.398 [2024-07-15 18:40:05.910859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.398 [2024-07-15 18:40:05.910875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:103056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.398 [2024-07-15 18:40:05.910888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.398 [2024-07-15 18:40:05.910904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:103064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.398 [2024-07-15 18:40:05.910919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.398 [2024-07-15 18:40:05.910936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:103072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.398 [2024-07-15 18:40:05.910952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.398 [2024-07-15 18:40:05.910969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 
lba:103080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.398 [2024-07-15 18:40:05.910984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.398 [2024-07-15 18:40:05.911002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:103088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.398 [2024-07-15 18:40:05.911018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.398 [2024-07-15 18:40:05.911032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:103096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.398 [2024-07-15 18:40:05.911043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.398 [2024-07-15 18:40:05.911056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:103136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.398 [2024-07-15 18:40:05.911066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.398 [2024-07-15 18:40:05.911079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:103144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.398 [2024-07-15 18:40:05.911090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.398 [2024-07-15 18:40:05.911105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:103152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.398 [2024-07-15 18:40:05.911117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.398 [2024-07-15 18:40:05.911134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:103160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.398 [2024-07-15 18:40:05.911151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.398 [2024-07-15 18:40:05.911168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:103168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.398 [2024-07-15 18:40:05.911183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.398 [2024-07-15 18:40:05.911200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:103176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.398 [2024-07-15 18:40:05.911215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.398 [2024-07-15 18:40:05.911232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:103184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.398 [2024-07-15 18:40:05.911245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.398 [2024-07-15 18:40:05.911269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:103192 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:20:43.398 [2024-07-15 18:40:05.911285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.398 [2024-07-15 18:40:05.911297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:103200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.398 [2024-07-15 18:40:05.911309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.399 [2024-07-15 18:40:05.911321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:103208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.399 [2024-07-15 18:40:05.911336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.399 [2024-07-15 18:40:05.911354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:103216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.399 [2024-07-15 18:40:05.911368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.399 [2024-07-15 18:40:05.911382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:103224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.399 [2024-07-15 18:40:05.911398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.399 [2024-07-15 18:40:05.911413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:103232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.399 [2024-07-15 18:40:05.911429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.399 [2024-07-15 18:40:05.911446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:103240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.399 [2024-07-15 18:40:05.911462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.399 [2024-07-15 18:40:05.911476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:103248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.399 [2024-07-15 18:40:05.911487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.399 [2024-07-15 18:40:05.911500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:103256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.399 [2024-07-15 18:40:05.911511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.399 [2024-07-15 18:40:05.911524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:103264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.399 [2024-07-15 18:40:05.911539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.399 [2024-07-15 18:40:05.911557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:103272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.399 [2024-07-15 
18:40:05.911581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.399 [2024-07-15 18:40:05.911599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:103280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.399 [2024-07-15 18:40:05.911615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.399 [2024-07-15 18:40:05.911632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:103288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.399 [2024-07-15 18:40:05.911648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.399 [2024-07-15 18:40:05.911662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:103296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.399 [2024-07-15 18:40:05.911673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.399 [2024-07-15 18:40:05.911686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:103304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.399 [2024-07-15 18:40:05.911699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.399 [2024-07-15 18:40:05.911718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:103312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.399 [2024-07-15 18:40:05.911732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.399 [2024-07-15 18:40:05.911747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:103320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.399 [2024-07-15 18:40:05.911763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.399 [2024-07-15 18:40:05.911781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:103328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.399 [2024-07-15 18:40:05.911795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.399 [2024-07-15 18:40:05.911809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:103336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.399 [2024-07-15 18:40:05.911825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.399 [2024-07-15 18:40:05.911838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:103344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.399 [2024-07-15 18:40:05.911849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.399 [2024-07-15 18:40:05.911862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:103352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.399 [2024-07-15 18:40:05.911880] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.399 [2024-07-15 18:40:05.911896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:103360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.399 [2024-07-15 18:40:05.911908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.399 [2024-07-15 18:40:05.911926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:103368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.399 [2024-07-15 18:40:05.911941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.399 [2024-07-15 18:40:05.911956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:103376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.399 [2024-07-15 18:40:05.911968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.399 [2024-07-15 18:40:05.911981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:103384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.399 [2024-07-15 18:40:05.911992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.399 [2024-07-15 18:40:05.912005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:103392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.399 [2024-07-15 18:40:05.912020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.399 [2024-07-15 18:40:05.912037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:103400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.399 [2024-07-15 18:40:05.912052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.399 [2024-07-15 18:40:05.912069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:103408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.399 [2024-07-15 18:40:05.912085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.399 [2024-07-15 18:40:05.912101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:103416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.399 [2024-07-15 18:40:05.912112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.399 [2024-07-15 18:40:05.912125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:103424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.399 [2024-07-15 18:40:05.912136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.399 [2024-07-15 18:40:05.912151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:103432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.399 [2024-07-15 18:40:05.912162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.399 [2024-07-15 18:40:05.912179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:103440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.399 [2024-07-15 18:40:05.912194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.399 [2024-07-15 18:40:05.912211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:103448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.399 [2024-07-15 18:40:05.912225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.399 [2024-07-15 18:40:05.912242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:103456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.399 [2024-07-15 18:40:05.912255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.399 [2024-07-15 18:40:05.912268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:103464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.399 [2024-07-15 18:40:05.912279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.399 [2024-07-15 18:40:05.912299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:103472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.399 [2024-07-15 18:40:05.912310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.399 [2024-07-15 18:40:05.912325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:103480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.399 [2024-07-15 18:40:05.912342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.399 [2024-07-15 18:40:05.912359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:103488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.399 [2024-07-15 18:40:05.912374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.399 [2024-07-15 18:40:05.912392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:103496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.399 [2024-07-15 18:40:05.912405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.399 [2024-07-15 18:40:05.912417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:103504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.399 [2024-07-15 18:40:05.912429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.399 [2024-07-15 18:40:05.912441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:103512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.399 [2024-07-15 18:40:05.912452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.399 [2024-07-15 18:40:05.912488] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:43.399 [2024-07-15 18:40:05.912504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103520 len:8 PRP1 0x0 PRP2 0x0 00:20:43.399 [2024-07-15 18:40:05.912520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.399 [2024-07-15 18:40:05.912539] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:43.399 [2024-07-15 18:40:05.912550] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:43.399 [2024-07-15 18:40:05.912560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103528 len:8 PRP1 0x0 PRP2 0x0 00:20:43.399 [2024-07-15 18:40:05.912581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.400 [2024-07-15 18:40:05.912595] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:43.400 [2024-07-15 18:40:05.912607] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:43.400 [2024-07-15 18:40:05.912620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103536 len:8 PRP1 0x0 PRP2 0x0 00:20:43.400 [2024-07-15 18:40:05.912634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.400 [2024-07-15 18:40:05.912647] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:43.400 [2024-07-15 18:40:05.912659] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:43.400 [2024-07-15 18:40:05.912673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103544 len:8 PRP1 0x0 PRP2 0x0 00:20:43.400 [2024-07-15 18:40:05.912687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.400 [2024-07-15 18:40:05.912702] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:43.400 [2024-07-15 18:40:05.912713] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:43.400 [2024-07-15 18:40:05.912726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103552 len:8 PRP1 0x0 PRP2 0x0 00:20:43.400 [2024-07-15 18:40:05.912741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.400 [2024-07-15 18:40:05.912755] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:43.400 [2024-07-15 18:40:05.912768] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:43.400 [2024-07-15 18:40:05.912780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103560 len:8 PRP1 0x0 PRP2 0x0 00:20:43.400 [2024-07-15 18:40:05.912796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.400 [2024-07-15 18:40:05.912811] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:43.400 
[2024-07-15 18:40:05.912822] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:43.400 [2024-07-15 18:40:05.912834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103568 len:8 PRP1 0x0 PRP2 0x0 00:20:43.400 [2024-07-15 18:40:05.912850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.400 [2024-07-15 18:40:05.912865] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:43.400 [2024-07-15 18:40:05.912877] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:43.400 [2024-07-15 18:40:05.912892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103576 len:8 PRP1 0x0 PRP2 0x0 00:20:43.400 [2024-07-15 18:40:05.912906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.400 [2024-07-15 18:40:05.912917] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:43.400 [2024-07-15 18:40:05.912926] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:43.400 [2024-07-15 18:40:05.912935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103584 len:8 PRP1 0x0 PRP2 0x0 00:20:43.400 [2024-07-15 18:40:05.912948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.400 [2024-07-15 18:40:05.912959] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:43.400 [2024-07-15 18:40:05.912971] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:43.400 [2024-07-15 18:40:05.912985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103592 len:8 PRP1 0x0 PRP2 0x0 00:20:43.400 [2024-07-15 18:40:05.913000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.400 [2024-07-15 18:40:05.913016] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:43.400 [2024-07-15 18:40:05.913028] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:43.400 [2024-07-15 18:40:05.913042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103600 len:8 PRP1 0x0 PRP2 0x0 00:20:43.400 [2024-07-15 18:40:05.913057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.400 [2024-07-15 18:40:05.913070] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:43.400 [2024-07-15 18:40:05.913079] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:43.400 [2024-07-15 18:40:05.913088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103608 len:8 PRP1 0x0 PRP2 0x0 00:20:43.400 [2024-07-15 18:40:05.913101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.400 [2024-07-15 18:40:05.913115] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:43.400 [2024-07-15 18:40:05.913127] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:43.400 [2024-07-15 18:40:05.913140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103616 len:8 PRP1 0x0 PRP2 0x0 00:20:43.400 [2024-07-15 18:40:05.913153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.400 [2024-07-15 18:40:05.913168] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:43.400 [2024-07-15 18:40:05.913183] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:43.400 [2024-07-15 18:40:05.913196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103624 len:8 PRP1 0x0 PRP2 0x0 00:20:43.400 [2024-07-15 18:40:05.913212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.400 [2024-07-15 18:40:05.913224] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:43.400 [2024-07-15 18:40:05.913233] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:43.400 [2024-07-15 18:40:05.913242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103632 len:8 PRP1 0x0 PRP2 0x0 00:20:43.400 [2024-07-15 18:40:05.913254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.400 [2024-07-15 18:40:05.913268] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:43.400 [2024-07-15 18:40:05.913280] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:43.400 [2024-07-15 18:40:05.913293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103640 len:8 PRP1 0x0 PRP2 0x0 00:20:43.400 [2024-07-15 18:40:05.913309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.400 [2024-07-15 18:40:05.913322] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:43.400 [2024-07-15 18:40:05.913335] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:43.400 [2024-07-15 18:40:05.913348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103648 len:8 PRP1 0x0 PRP2 0x0 00:20:43.400 [2024-07-15 18:40:05.913364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.400 [2024-07-15 18:40:05.913378] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:43.400 [2024-07-15 18:40:05.913388] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:43.400 [2024-07-15 18:40:05.913397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103656 len:8 PRP1 0x0 PRP2 0x0 00:20:43.400 [2024-07-15 18:40:05.913408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.400 [2024-07-15 18:40:05.913420] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:43.400 [2024-07-15 18:40:05.913432] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:20:43.400 [2024-07-15 18:40:05.913444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103664 len:8 PRP1 0x0 PRP2 0x0 00:20:43.400 [2024-07-15 18:40:05.913460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.400 [2024-07-15 18:40:05.913471] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:43.400 [2024-07-15 18:40:05.913484] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:43.400 [2024-07-15 18:40:05.913498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103672 len:8 PRP1 0x0 PRP2 0x0 00:20:43.400 [2024-07-15 18:40:05.913513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.400 [2024-07-15 18:40:05.913529] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:43.400 [2024-07-15 18:40:05.913541] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:43.400 [2024-07-15 18:40:05.913553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103680 len:8 PRP1 0x0 PRP2 0x0 00:20:43.400 [2024-07-15 18:40:05.913564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.400 [2024-07-15 18:40:05.930903] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:43.400 [2024-07-15 18:40:05.930942] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:43.400 [2024-07-15 18:40:05.930955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103688 len:8 PRP1 0x0 PRP2 0x0 00:20:43.400 [2024-07-15 18:40:05.930968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.400 [2024-07-15 18:40:05.930983] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:43.400 [2024-07-15 18:40:05.930992] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:43.400 [2024-07-15 18:40:05.931002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103696 len:8 PRP1 0x0 PRP2 0x0 00:20:43.400 [2024-07-15 18:40:05.931013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.400 [2024-07-15 18:40:05.931025] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:43.400 [2024-07-15 18:40:05.931034] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:43.400 [2024-07-15 18:40:05.931043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103704 len:8 PRP1 0x0 PRP2 0x0 00:20:43.400 [2024-07-15 18:40:05.931054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.400 [2024-07-15 18:40:05.931066] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:43.400 [2024-07-15 18:40:05.931075] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:43.400 
[2024-07-15 18:40:05.931085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103712 len:8 PRP1 0x0 PRP2 0x0 00:20:43.401 [2024-07-15 18:40:05.931096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.401 [2024-07-15 18:40:05.931107] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:43.401 [2024-07-15 18:40:05.931116] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:43.401 [2024-07-15 18:40:05.931125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103720 len:8 PRP1 0x0 PRP2 0x0 00:20:43.401 [2024-07-15 18:40:05.931136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.401 [2024-07-15 18:40:05.931148] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:43.401 [2024-07-15 18:40:05.931157] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:43.401 [2024-07-15 18:40:05.931166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103728 len:8 PRP1 0x0 PRP2 0x0 00:20:43.401 [2024-07-15 18:40:05.931177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.401 [2024-07-15 18:40:05.931188] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:43.401 [2024-07-15 18:40:05.931197] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:43.401 [2024-07-15 18:40:05.931207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103736 len:8 PRP1 0x0 PRP2 0x0 00:20:43.401 [2024-07-15 18:40:05.931218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.401 [2024-07-15 18:40:05.931229] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:43.401 [2024-07-15 18:40:05.931238] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:43.401 [2024-07-15 18:40:05.931247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103744 len:8 PRP1 0x0 PRP2 0x0 00:20:43.401 [2024-07-15 18:40:05.931270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.401 [2024-07-15 18:40:05.931282] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:43.401 [2024-07-15 18:40:05.931290] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:43.401 [2024-07-15 18:40:05.931300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103752 len:8 PRP1 0x0 PRP2 0x0 00:20:43.401 [2024-07-15 18:40:05.931311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.401 [2024-07-15 18:40:05.931378] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20278d0 was disconnected and freed. reset controller. 
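The block above is a single burst of queued I/O being completed with ABORTED - SQ DELETION while qpair 0x20278d0 is disconnected and freed for the controller reset. When triaging a saved copy of this console output offline, a quick tally of the aborted submissions is usually enough to distinguish this abort path from a genuine data-path failure. The snippet below is a minimal sketch under that assumption; "console.log" is a hypothetical file name for a locally saved copy of this output, not an artifact this job produces.
```bash
# Count how many commands were completed with ABORTED - SQ DELETION.
# grep -o emits one line per occurrence, so this counts entries even when
# several log entries were flattened onto one physical line.
grep -o 'ABORTED - SQ DELETION' console.log | wc -l

# Break the aborted submissions down by opcode (READ vs WRITE).
grep -oE '[*]NOTICE[*]: (READ|WRITE) sqid:[0-9]+' console.log | awk '{print $2}' | sort | uniq -c
```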
00:20:43.401 [2024-07-15 18:40:05.931507] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:20:43.401 [2024-07-15 18:40:05.931525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:43.401 [2024-07-15 18:40:05.931545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:20:43.401 [2024-07-15 18:40:05.931587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:43.401 [2024-07-15 18:40:05.931606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:20:43.401 [2024-07-15 18:40:05.931623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:43.401 [2024-07-15 18:40:05.931645] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:20:43.401 [2024-07-15 18:40:05.931666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:43.401 [2024-07-15 18:40:05.931685] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fba240 is same with the state(5) to be set 
00:20:43.401 [2024-07-15 18:40:05.931975] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:20:43.401 [2024-07-15 18:40:05.932047] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fba240 (9): Bad file descriptor 
00:20:43.401 [2024-07-15 18:40:05.932224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:20:43.401 [2024-07-15 18:40:05.932267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fba240 with addr=10.0.0.2, port=4420 
00:20:43.401 [2024-07-15 18:40:05.932292] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fba240 is same with the state(5) to be set 
00:20:43.401 [2024-07-15 18:40:05.932333] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fba240 (9): Bad file descriptor 
00:20:43.401 [2024-07-15 18:40:05.932369] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 
00:20:43.401 [2024-07-15 18:40:05.932397] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 
00:20:43.401 [2024-07-15 18:40:05.932423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:43.401 [2024-07-15 18:40:05.932464] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:43.401 [2024-07-15 18:40:05.932493] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:20:43.401 18:40:05 nvmf_tcp.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 
00:20:44.336 [2024-07-15 18:40:06.931027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:20:44.336 [2024-07-15 18:40:06.931092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fba240 with addr=10.0.0.2, port=4420 
00:20:44.336 [2024-07-15 18:40:06.931105] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fba240 is same with the state(5) to be set 
00:20:44.336 [2024-07-15 18:40:06.931128] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fba240 (9): Bad file descriptor 
00:20:44.336 [2024-07-15 18:40:06.931144] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 
00:20:44.336 [2024-07-15 18:40:06.931153] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 
00:20:44.336 [2024-07-15 18:40:06.931164] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:44.336 [2024-07-15 18:40:06.931186] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:44.336 [2024-07-15 18:40:06.931195] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:20:44.336 18:40:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:20:44.595 [2024-07-15 18:40:07.120820] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:20:44.595 18:40:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@92 -- # wait 95945 
00:20:45.528 [2024-07-15 18:40:07.944824] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:20:53.672 
00:20:53.672 Latency(us) 
00:20:53.672 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:20:53.672 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 
00:20:53.672 Verification LBA range: start 0x0 length 0x4000 
00:20:53.672 NVMe0n1 : 10.00 8462.39 33.06 0.00 0.00 15096.94 1546.28 3045502.66 
00:20:53.672 =================================================================================================================== 
00:20:53.672 Total : 8462.39 33.06 0.00 0.00 15096.94 1546.28 3045502.66 
00:20:53.672 0 
00:20:53.672 18:40:14 nvmf_tcp.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=96066 
00:20:53.672 18:40:14 nvmf_tcp.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1 
00:20:53.672 18:40:14 nvmf_tcp.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 
00:20:53.672 Running I/O for 10 seconds... 
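The stretch above is the interesting part of this nvmf_timeout iteration: the host keeps failing to reconnect while the listener is down, the test re-adds the listener via rpc.py, the controller reset then completes, and bdevperf prints the verify-workload summary. As a quick sanity check on that summary row, 8462.39 IOPS at a 4096-byte I/O size works out to 8462.39 * 4096 / 2^20 ≈ 33.06 MiB/s, matching the reported throughput. For reference, the sketch below shows how the same listener-toggle cycle could be driven by hand with the scripts that appear in the log; it assumes a locally running SPDK target and an already-started bdevperf with its RPC socket at /var/tmp/bdevperf.sock, and it is only a sketch, not a substitute for the host/timeout.sh test itself.
```bash
#!/usr/bin/env bash
# Sketch of the listener-toggle cycle exercised above. Paths, NQN, and address
# are taken from the log; the sleep duration is illustrative.
set -euo pipefail

SPDK=/home/vagrant/spdk_repo/spdk          # repo location as printed in the log
NQN=nqn.2016-06.io.spdk:cnode1
ADDR=(-t tcp -a 10.0.0.2 -s 4420)

# Drop the TCP listener so the host's queued I/O is aborted (SQ deletion).
"$SPDK"/scripts/rpc.py nvmf_subsystem_remove_listener "$NQN" "${ADDR[@]}"
sleep 1

# Restore the listener; the host-side controller reset should then succeed.
"$SPDK"/scripts/rpc.py nvmf_subsystem_add_listener "$NQN" "${ADDR[@]}"

# Ask the already-running bdevperf instance to execute its configured workload.
"$SPDK"/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
```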
00:20:53.672 18:40:15 nvmf_tcp.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:53.672 [2024-07-15 18:40:16.013097] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184c660 is same with the state(5) to be set 00:20:53.672 [2024-07-15 18:40:16.013148] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184c660 is same with the state(5) to be set 00:20:53.672 [2024-07-15 18:40:16.013158] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184c660 is same with the state(5) to be set 00:20:53.672 [2024-07-15 18:40:16.013167] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184c660 is same with the state(5) to be set 00:20:53.672 [2024-07-15 18:40:16.013175] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184c660 is same with the state(5) to be set 00:20:53.672 [2024-07-15 18:40:16.013183] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184c660 is same with the state(5) to be set 00:20:53.672 [2024-07-15 18:40:16.013191] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184c660 is same with the state(5) to be set 00:20:53.672 [2024-07-15 18:40:16.013200] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184c660 is same with the state(5) to be set 00:20:53.672 [2024-07-15 18:40:16.013207] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184c660 is same with the state(5) to be set 00:20:53.672 [2024-07-15 18:40:16.013215] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184c660 is same with the state(5) to be set 00:20:53.672 [2024-07-15 18:40:16.013223] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184c660 is same with the state(5) to be set 00:20:53.672 [2024-07-15 18:40:16.013231] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184c660 is same with the state(5) to be set 00:20:53.672 [2024-07-15 18:40:16.013238] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184c660 is same with the state(5) to be set 00:20:53.672 [2024-07-15 18:40:16.013246] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184c660 is same with the state(5) to be set 00:20:53.672 [2024-07-15 18:40:16.013254] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184c660 is same with the state(5) to be set 00:20:53.672 [2024-07-15 18:40:16.013262] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184c660 is same with the state(5) to be set 00:20:53.672 [2024-07-15 18:40:16.013269] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184c660 is same with the state(5) to be set 00:20:53.672 [2024-07-15 18:40:16.013277] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184c660 is same with the state(5) to be set 00:20:53.672 [2024-07-15 18:40:16.013285] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184c660 is same with the state(5) to be set 00:20:53.672 [2024-07-15 18:40:16.013293] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184c660 is same with the state(5) to be set 00:20:53.672 [2024-07-15 18:40:16.013301] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184c660 is same with the state(5) to be set 00:20:53.672 [2024-07-15 18:40:16.013309] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184c660 is same with the state(5) to be set 00:20:53.672 [2024-07-15 18:40:16.013317] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184c660 is same with the state(5) to be set 00:20:53.672 [2024-07-15 18:40:16.013324] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184c660 is same with the state(5) to be set 00:20:53.672 [2024-07-15 18:40:16.013332] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184c660 is same with the state(5) to be set 00:20:53.672 [2024-07-15 18:40:16.013340] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184c660 is same with the state(5) to be set 00:20:53.673 [2024-07-15 18:40:16.013347] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184c660 is same with the state(5) to be set 00:20:53.673 [2024-07-15 18:40:16.013355] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184c660 is same with the state(5) to be set 00:20:53.673 [2024-07-15 18:40:16.013363] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184c660 is same with the state(5) to be set 00:20:53.673 [2024-07-15 18:40:16.013370] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184c660 is same with the state(5) to be set 00:20:53.673 [2024-07-15 18:40:16.013378] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184c660 is same with the state(5) to be set 00:20:53.673 [2024-07-15 18:40:16.013385] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184c660 is same with the state(5) to be set 00:20:53.673 [2024-07-15 18:40:16.013393] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184c660 is same with the state(5) to be set 00:20:53.673 [2024-07-15 18:40:16.013401] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184c660 is same with the state(5) to be set 00:20:53.673 [2024-07-15 18:40:16.013410] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184c660 is same with the state(5) to be set 00:20:53.673 [2024-07-15 18:40:16.013418] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184c660 is same with the state(5) to be set 00:20:53.673 [2024-07-15 18:40:16.013425] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184c660 is same with the state(5) to be set 00:20:53.673 [2024-07-15 18:40:16.013433] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184c660 is same with the state(5) to be set 00:20:53.673 [2024-07-15 18:40:16.013441] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184c660 is same with the state(5) to be set 00:20:53.673 [2024-07-15 18:40:16.013449] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184c660 is same with the state(5) to be set 00:20:53.673 [2024-07-15 18:40:16.013457] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184c660 is same with the state(5) to be set 00:20:53.673 [2024-07-15 18:40:16.013464] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184c660 is same with the 
state(5) to be set 00:20:53.673 [2024-07-15 18:40:16.013472] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184c660 is same with the state(5) to be set 00:20:53.673 [2024-07-15 18:40:16.013480] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184c660 is same with the state(5) to be set 00:20:53.673 [2024-07-15 18:40:16.013488] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184c660 is same with the state(5) to be set 00:20:53.673 [2024-07-15 18:40:16.013498] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184c660 is same with the state(5) to be set 00:20:53.673 [2024-07-15 18:40:16.013506] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184c660 is same with the state(5) to be set 00:20:53.673 [2024-07-15 18:40:16.013514] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184c660 is same with the state(5) to be set 00:20:53.673 [2024-07-15 18:40:16.013521] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184c660 is same with the state(5) to be set 00:20:53.673 [2024-07-15 18:40:16.013529] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184c660 is same with the state(5) to be set 00:20:53.673 [2024-07-15 18:40:16.013536] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184c660 is same with the state(5) to be set 00:20:53.673 [2024-07-15 18:40:16.013544] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184c660 is same with the state(5) to be set 00:20:53.673 [2024-07-15 18:40:16.013552] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184c660 is same with the state(5) to be set 00:20:53.673 [2024-07-15 18:40:16.013559] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184c660 is same with the state(5) to be set 00:20:53.673 [2024-07-15 18:40:16.013577] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184c660 is same with the state(5) to be set 00:20:53.673 [2024-07-15 18:40:16.013585] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184c660 is same with the state(5) to be set 00:20:53.673 [2024-07-15 18:40:16.013593] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184c660 is same with the state(5) to be set 00:20:53.673 [2024-07-15 18:40:16.013600] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184c660 is same with the state(5) to be set 00:20:53.673 [2024-07-15 18:40:16.013608] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184c660 is same with the state(5) to be set 00:20:53.673 [2024-07-15 18:40:16.013616] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184c660 is same with the state(5) to be set 00:20:53.673 [2024-07-15 18:40:16.013624] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184c660 is same with the state(5) to be set 00:20:53.673 [2024-07-15 18:40:16.013632] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184c660 is same with the state(5) to be set 00:20:53.673 [2024-07-15 18:40:16.013639] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184c660 is same with the state(5) to be set 00:20:53.673 [2024-07-15 18:40:16.013647] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x184c660 is same with the state(5) to be set 00:20:53.673 [2024-07-15 18:40:16.013655] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184c660 is same with the state(5) to be set 00:20:53.673 [2024-07-15 18:40:16.013662] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184c660 is same with the state(5) to be set 00:20:53.673 [2024-07-15 18:40:16.013675] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184c660 is same with the state(5) to be set 00:20:53.673 [2024-07-15 18:40:16.013683] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184c660 is same with the state(5) to be set 00:20:53.673 [2024-07-15 18:40:16.013690] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184c660 is same with the state(5) to be set 00:20:53.673 [2024-07-15 18:40:16.013698] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184c660 is same with the state(5) to be set 00:20:53.673 [2024-07-15 18:40:16.014130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:103632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.673 [2024-07-15 18:40:16.014177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.673 [2024-07-15 18:40:16.014206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:103640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.673 [2024-07-15 18:40:16.014222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.673 [2024-07-15 18:40:16.014241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:103648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.673 [2024-07-15 18:40:16.014253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.673 [2024-07-15 18:40:16.014266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:103656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.673 [2024-07-15 18:40:16.014277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.673 [2024-07-15 18:40:16.014289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:103664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.673 [2024-07-15 18:40:16.014300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.673 [2024-07-15 18:40:16.014314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:103672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.673 [2024-07-15 18:40:16.014329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.673 [2024-07-15 18:40:16.014344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:103680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.673 [2024-07-15 18:40:16.014358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.673 [2024-07-15 
18:40:16.014377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:103688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.673 [2024-07-15 18:40:16.014393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.673 [2024-07-15 18:40:16.014410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:103696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.673 [2024-07-15 18:40:16.014425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.673 [2024-07-15 18:40:16.014440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:103704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.673 [2024-07-15 18:40:16.014455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.673 [2024-07-15 18:40:16.014472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:103712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.673 [2024-07-15 18:40:16.014487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.673 [2024-07-15 18:40:16.014501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:103720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.673 [2024-07-15 18:40:16.014512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.673 [2024-07-15 18:40:16.014529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:103728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.673 [2024-07-15 18:40:16.014542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.673 [2024-07-15 18:40:16.014555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:103736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.673 [2024-07-15 18:40:16.014576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.673 [2024-07-15 18:40:16.014590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:103744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.673 [2024-07-15 18:40:16.014601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.673 [2024-07-15 18:40:16.014616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:103752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.673 [2024-07-15 18:40:16.014631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.673 [2024-07-15 18:40:16.014644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:103760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.673 [2024-07-15 18:40:16.014660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.673 [2024-07-15 18:40:16.014677] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:103768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.673 [2024-07-15 18:40:16.014694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.673 [2024-07-15 18:40:16.014710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:103776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.673 [2024-07-15 18:40:16.014726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.673 [2024-07-15 18:40:16.014740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:103784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.673 [2024-07-15 18:40:16.014752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.673 [2024-07-15 18:40:16.014764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:103792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.673 [2024-07-15 18:40:16.014775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.673 [2024-07-15 18:40:16.014788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:103800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.673 [2024-07-15 18:40:16.014801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.674 [2024-07-15 18:40:16.014817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:103808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.674 [2024-07-15 18:40:16.014833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.674 [2024-07-15 18:40:16.014850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:103816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.674 [2024-07-15 18:40:16.014866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.674 [2024-07-15 18:40:16.014883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:103824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.674 [2024-07-15 18:40:16.014898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.674 [2024-07-15 18:40:16.014913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:103832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.674 [2024-07-15 18:40:16.014924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.674 [2024-07-15 18:40:16.014936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:103840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.674 [2024-07-15 18:40:16.014947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.674 [2024-07-15 18:40:16.014959] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:65 nsid:1 lba:103848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.674 [2024-07-15 18:40:16.014970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.674 [2024-07-15 18:40:16.014984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:103856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.674 [2024-07-15 18:40:16.015001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.674 [2024-07-15 18:40:16.015015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:103864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.674 [2024-07-15 18:40:16.015029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.674 [2024-07-15 18:40:16.015045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:103872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.674 [2024-07-15 18:40:16.015061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.674 [2024-07-15 18:40:16.015077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:103880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.674 [2024-07-15 18:40:16.015092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.674 [2024-07-15 18:40:16.015109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:103888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.674 [2024-07-15 18:40:16.015124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.674 [2024-07-15 18:40:16.015141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:103896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.674 [2024-07-15 18:40:16.015155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.674 [2024-07-15 18:40:16.015170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:103904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.674 [2024-07-15 18:40:16.015185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.674 [2024-07-15 18:40:16.015203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:103912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.674 [2024-07-15 18:40:16.015214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.674 [2024-07-15 18:40:16.015230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:103920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.674 [2024-07-15 18:40:16.015245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.674 [2024-07-15 18:40:16.015270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 
nsid:1 lba:103928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.674 [2024-07-15 18:40:16.015282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.674 [2024-07-15 18:40:16.015297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:104136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.674 [2024-07-15 18:40:16.015313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.674 [2024-07-15 18:40:16.015330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:104144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.674 [2024-07-15 18:40:16.015346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.674 [2024-07-15 18:40:16.015359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:104152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.674 [2024-07-15 18:40:16.015370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.674 [2024-07-15 18:40:16.015382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:104160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.674 [2024-07-15 18:40:16.015395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.674 [2024-07-15 18:40:16.015413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:104168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.674 [2024-07-15 18:40:16.015427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.674 [2024-07-15 18:40:16.015441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:104176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.674 [2024-07-15 18:40:16.015457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.674 [2024-07-15 18:40:16.015475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:104184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.674 [2024-07-15 18:40:16.015492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.674 [2024-07-15 18:40:16.015505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:104192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.674 [2024-07-15 18:40:16.015518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.674 [2024-07-15 18:40:16.015531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:104200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.674 [2024-07-15 18:40:16.015542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.674 [2024-07-15 18:40:16.015555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:104208 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:20:53.674 [2024-07-15 18:40:16.015575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.674 [2024-07-15 18:40:16.015593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:104216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.674 [2024-07-15 18:40:16.015608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.674 [2024-07-15 18:40:16.015623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:104224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.674 [2024-07-15 18:40:16.015635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.674 [2024-07-15 18:40:16.015653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:104232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.674 [2024-07-15 18:40:16.015668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.674 [2024-07-15 18:40:16.015687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:104240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.674 [2024-07-15 18:40:16.015704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.674 [2024-07-15 18:40:16.015720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:104248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.674 [2024-07-15 18:40:16.015734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.674 [2024-07-15 18:40:16.015748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:104256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.674 [2024-07-15 18:40:16.015758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.674 [2024-07-15 18:40:16.015771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:104264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.674 [2024-07-15 18:40:16.015782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.674 [2024-07-15 18:40:16.015795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:104272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.674 [2024-07-15 18:40:16.015809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.674 [2024-07-15 18:40:16.015826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:104280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.674 [2024-07-15 18:40:16.015840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.674 [2024-07-15 18:40:16.015858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:104288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.674 
[2024-07-15 18:40:16.015874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.674 [2024-07-15 18:40:16.015890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:104296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.674 [2024-07-15 18:40:16.015901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.674 [2024-07-15 18:40:16.015916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:104304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.674 [2024-07-15 18:40:16.015932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.674 [2024-07-15 18:40:16.015948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:104312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.674 [2024-07-15 18:40:16.015964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.674 [2024-07-15 18:40:16.015983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:104320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.674 [2024-07-15 18:40:16.015999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.674 [2024-07-15 18:40:16.016016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:104328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.674 [2024-07-15 18:40:16.016032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.674 [2024-07-15 18:40:16.016046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:104336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.674 [2024-07-15 18:40:16.016057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.675 [2024-07-15 18:40:16.016069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:104344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.675 [2024-07-15 18:40:16.016080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.675 [2024-07-15 18:40:16.016097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:104352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.675 [2024-07-15 18:40:16.016112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.675 [2024-07-15 18:40:16.016128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:104360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.675 [2024-07-15 18:40:16.016143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.675 [2024-07-15 18:40:16.016159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:104368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.675 [2024-07-15 18:40:16.016175] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.675 [2024-07-15 18:40:16.016191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:104376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.675 [2024-07-15 18:40:16.016205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.675 [2024-07-15 18:40:16.016221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:103936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.675 [2024-07-15 18:40:16.016237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.675 [2024-07-15 18:40:16.016253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:103944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.675 [2024-07-15 18:40:16.016264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.675 [2024-07-15 18:40:16.016277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:103952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.675 [2024-07-15 18:40:16.016288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.675 [2024-07-15 18:40:16.016301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:103960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.675 [2024-07-15 18:40:16.016312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.675 [2024-07-15 18:40:16.016326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:103968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.675 [2024-07-15 18:40:16.016341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.675 [2024-07-15 18:40:16.016358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:103976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.675 [2024-07-15 18:40:16.016372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.675 [2024-07-15 18:40:16.016389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:103984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.675 [2024-07-15 18:40:16.016404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.675 [2024-07-15 18:40:16.016421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:103992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.675 [2024-07-15 18:40:16.016436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.675 [2024-07-15 18:40:16.016453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:104000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.675 [2024-07-15 18:40:16.016468] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.675 [2024-07-15 18:40:16.016482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:104008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.675 [2024-07-15 18:40:16.016497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.675 [2024-07-15 18:40:16.016513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:104016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.675 [2024-07-15 18:40:16.016525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.675 [2024-07-15 18:40:16.016537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:104024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.675 [2024-07-15 18:40:16.016551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.675 [2024-07-15 18:40:16.016577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:104032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.675 [2024-07-15 18:40:16.016593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.675 [2024-07-15 18:40:16.016610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:104040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.675 [2024-07-15 18:40:16.016625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.675 [2024-07-15 18:40:16.016642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:104048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.675 [2024-07-15 18:40:16.016658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.675 [2024-07-15 18:40:16.016674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:104056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.675 [2024-07-15 18:40:16.016688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.675 [2024-07-15 18:40:16.016701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:104064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.675 [2024-07-15 18:40:16.016715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.675 [2024-07-15 18:40:16.016733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:104072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.675 [2024-07-15 18:40:16.016745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.675 [2024-07-15 18:40:16.016762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:104080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.675 [2024-07-15 18:40:16.016778] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.675 [2024-07-15 18:40:16.016794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:104088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.675 [2024-07-15 18:40:16.016807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.675 [2024-07-15 18:40:16.016819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:104096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.675 [2024-07-15 18:40:16.016831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.675 [2024-07-15 18:40:16.016843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:104104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.675 [2024-07-15 18:40:16.016856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.675 [2024-07-15 18:40:16.016871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:104112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.675 [2024-07-15 18:40:16.016883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.675 [2024-07-15 18:40:16.016902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:104120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.675 [2024-07-15 18:40:16.016919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.675 [2024-07-15 18:40:16.016936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:104128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.675 [2024-07-15 18:40:16.016951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.675 [2024-07-15 18:40:16.016965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:104384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.675 [2024-07-15 18:40:16.016976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.675 [2024-07-15 18:40:16.016989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:104392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.675 [2024-07-15 18:40:16.017004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.675 [2024-07-15 18:40:16.017022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:104400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.675 [2024-07-15 18:40:16.017037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.675 [2024-07-15 18:40:16.017052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:104408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.675 [2024-07-15 18:40:16.017068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.675 [2024-07-15 18:40:16.017087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:104416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.675 [2024-07-15 18:40:16.017101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.675 [2024-07-15 18:40:16.017119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:104424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.675 [2024-07-15 18:40:16.017133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.675 [2024-07-15 18:40:16.017148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:104432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.675 [2024-07-15 18:40:16.017160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.675 [2024-07-15 18:40:16.017174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:104440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.675 [2024-07-15 18:40:16.017193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.675 [2024-07-15 18:40:16.017208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:104448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.675 [2024-07-15 18:40:16.017226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.675 [2024-07-15 18:40:16.017243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:104456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.675 [2024-07-15 18:40:16.017258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.675 [2024-07-15 18:40:16.017273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:104464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.675 [2024-07-15 18:40:16.017288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.675 [2024-07-15 18:40:16.017307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:104472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.675 [2024-07-15 18:40:16.017324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.675 [2024-07-15 18:40:16.017336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:104480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.675 [2024-07-15 18:40:16.017347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.676 [2024-07-15 18:40:16.017359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:104488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.676 [2024-07-15 18:40:16.017371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:20:53.676 [2024-07-15 18:40:16.017389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:104496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.676 [2024-07-15 18:40:16.017404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.676 [2024-07-15 18:40:16.017420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:104504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.676 [2024-07-15 18:40:16.017435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.676 [2024-07-15 18:40:16.017452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:104512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.676 [2024-07-15 18:40:16.017468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.676 [2024-07-15 18:40:16.017484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:104520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.676 [2024-07-15 18:40:16.017499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.676 [2024-07-15 18:40:16.017516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:104528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.676 [2024-07-15 18:40:16.017529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.676 [2024-07-15 18:40:16.017541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:104536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.676 [2024-07-15 18:40:16.017553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.676 [2024-07-15 18:40:16.017574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:104544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.676 [2024-07-15 18:40:16.017586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.676 [2024-07-15 18:40:16.017599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:104552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.676 [2024-07-15 18:40:16.017612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.676 [2024-07-15 18:40:16.017628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:104560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.676 [2024-07-15 18:40:16.017644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.676 [2024-07-15 18:40:16.017661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:104568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.676 [2024-07-15 18:40:16.017691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.676 [2024-07-15 
18:40:16.017715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:104576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.676 [2024-07-15 18:40:16.017730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.676 [2024-07-15 18:40:16.017746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:104584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.676 [2024-07-15 18:40:16.017760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.676 [2024-07-15 18:40:16.017776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:104592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.676 [2024-07-15 18:40:16.017790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.676 [2024-07-15 18:40:16.017807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:104600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.676 [2024-07-15 18:40:16.017821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.676 [2024-07-15 18:40:16.017834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:104608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.676 [2024-07-15 18:40:16.017844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.676 [2024-07-15 18:40:16.017857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:104616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.676 [2024-07-15 18:40:16.017868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.676 [2024-07-15 18:40:16.017880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:104624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.676 [2024-07-15 18:40:16.017891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.676 [2024-07-15 18:40:16.017906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:104632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.676 [2024-07-15 18:40:16.017920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.676 [2024-07-15 18:40:16.017958] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.676 [2024-07-15 18:40:16.017975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104640 len:8 PRP1 0x0 PRP2 0x0 00:20:53.676 [2024-07-15 18:40:16.017990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.676 [2024-07-15 18:40:16.018009] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.676 [2024-07-15 18:40:16.018022] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.676 [2024-07-15 18:40:16.018035] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104648 len:8 PRP1 0x0 PRP2 0x0 00:20:53.676 [2024-07-15 18:40:16.018050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.676 [2024-07-15 18:40:16.018109] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2038600 was disconnected and freed. reset controller. 00:20:53.676 [2024-07-15 18:40:16.018203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:53.676 [2024-07-15 18:40:16.018223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.676 [2024-07-15 18:40:16.018239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:53.676 [2024-07-15 18:40:16.018254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.676 [2024-07-15 18:40:16.018270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:53.676 [2024-07-15 18:40:16.018285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.676 [2024-07-15 18:40:16.018300] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:53.676 [2024-07-15 18:40:16.018314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.676 [2024-07-15 18:40:16.018330] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fba240 is same with the state(5) to be set 00:20:53.676 [2024-07-15 18:40:16.018542] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:53.676 [2024-07-15 18:40:16.018585] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fba240 (9): Bad file descriptor 00:20:53.676 [2024-07-15 18:40:16.018690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:53.676 [2024-07-15 18:40:16.018714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fba240 with addr=10.0.0.2, port=4420 00:20:53.676 [2024-07-15 18:40:16.018729] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fba240 is same with the state(5) to be set 00:20:53.676 [2024-07-15 18:40:16.018753] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fba240 (9): Bad file descriptor 00:20:53.676 [2024-07-15 18:40:16.018774] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:53.676 [2024-07-15 18:40:16.018787] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:53.676 [2024-07-15 18:40:16.018803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:53.676 [2024-07-15 18:40:16.018830] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:53.676 18:40:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:20:53.676 [2024-07-15 18:40:16.037706] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:54.613 [2024-07-15 18:40:17.036244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:54.613 [2024-07-15 18:40:17.036301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fba240 with addr=10.0.0.2, port=4420 00:20:54.613 [2024-07-15 18:40:17.036315] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fba240 is same with the state(5) to be set 00:20:54.613 [2024-07-15 18:40:17.036336] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fba240 (9): Bad file descriptor 00:20:54.613 [2024-07-15 18:40:17.036352] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:54.613 [2024-07-15 18:40:17.036360] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:54.613 [2024-07-15 18:40:17.036371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:54.613 [2024-07-15 18:40:17.036394] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:54.613 [2024-07-15 18:40:17.036404] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:55.551 [2024-07-15 18:40:18.034918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:55.551 [2024-07-15 18:40:18.034983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fba240 with addr=10.0.0.2, port=4420 00:20:55.551 [2024-07-15 18:40:18.034998] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fba240 is same with the state(5) to be set 00:20:55.551 [2024-07-15 18:40:18.035021] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fba240 (9): Bad file descriptor 00:20:55.551 [2024-07-15 18:40:18.035037] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:55.551 [2024-07-15 18:40:18.035046] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:55.551 [2024-07-15 18:40:18.035057] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:55.551 [2024-07-15 18:40:18.035079] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:55.551 [2024-07-15 18:40:18.035090] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:56.502 [2024-07-15 18:40:19.033900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:56.502 [2024-07-15 18:40:19.033966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fba240 with addr=10.0.0.2, port=4420 00:20:56.502 [2024-07-15 18:40:19.033981] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fba240 is same with the state(5) to be set 00:20:56.502 [2024-07-15 18:40:19.034169] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fba240 (9): Bad file descriptor 00:20:56.502 [2024-07-15 18:40:19.034363] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:56.502 [2024-07-15 18:40:19.034382] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:56.502 [2024-07-15 18:40:19.034398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:56.502 18:40:19 nvmf_tcp.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:56.502 [2024-07-15 18:40:19.037374] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:56.502 [2024-07-15 18:40:19.037412] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:56.760 [2024-07-15 18:40:19.237227] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:56.760 18:40:19 nvmf_tcp.nvmf_timeout -- host/timeout.sh@103 -- # wait 96066 00:20:57.696 [2024-07-15 18:40:20.071508] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
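The block above is the recovery half of the listener-drop cycle that host/timeout.sh drives: while the target's TCP listener is gone, every host-side reconnect attempt fails with connect() errno = 111 (ECONNREFUSED), roughly once per second in this run, and each attempt ends in "Resetting controller failed."; once step @102 re-adds the listener the target logs "NVMe/TCP Target Listening" and the next reset completes with "Resetting controller successful.". A minimal sketch of that cycle, using only the rpc.py calls that appear in this log (the remove_listener half of this first cycle sits outside the excerpt, but the same call appears again at step @126 further down):

    # take the subsystem's TCP listener away; queued I/O is aborted (SQ DELETION) and reconnects start failing with errno 111
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    sleep 3
    # put the listener back; the next bdev_nvme reset/reconnect attempt succeeds
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The sleep 3 matches step @101 above; whether the interrupted I/O is retried or failed back to bdevperf depends on the reconnect options this first controller was attached with, which are set earlier in the log, outside this excerpt.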
00:21:02.981 00:21:02.981 Latency(us) 00:21:02.981 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:02.981 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:02.981 Verification LBA range: start 0x0 length 0x4000 00:21:02.981 NVMe0n1 : 10.00 7104.45 27.75 5249.55 0.00 10341.53 450.72 3018551.31 00:21:02.981 =================================================================================================================== 00:21:02.981 Total : 7104.45 27.75 5249.55 0.00 10341.53 0.00 3018551.31 00:21:02.981 0 00:21:02.981 18:40:24 nvmf_tcp.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 95903 00:21:02.981 18:40:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 95903 ']' 00:21:02.981 18:40:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 95903 00:21:02.981 18:40:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:21:02.981 18:40:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:02.981 18:40:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 95903 00:21:02.981 killing process with pid 95903 00:21:02.981 Received shutdown signal, test time was about 10.000000 seconds 00:21:02.981 00:21:02.981 Latency(us) 00:21:02.981 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:02.981 =================================================================================================================== 00:21:02.981 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:02.981 18:40:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:02.981 18:40:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:02.981 18:40:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 95903' 00:21:02.981 18:40:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 95903 00:21:02.981 18:40:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 95903 00:21:02.981 18:40:25 nvmf_tcp.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=96188 00:21:02.981 18:40:25 nvmf_tcp.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:21:02.981 18:40:25 nvmf_tcp.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 96188 /var/tmp/bdevperf.sock 00:21:02.981 18:40:25 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 96188 ']' 00:21:02.981 18:40:25 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:02.981 18:40:25 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:02.981 18:40:25 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:02.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:02.981 18:40:25 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:02.981 18:40:25 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:02.981 [2024-07-15 18:40:25.174135] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 
00:21:02.981 [2024-07-15 18:40:25.174739] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96188 ]
00:21:02.981 [2024-07-15 18:40:25.315622] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:02.981 [2024-07-15 18:40:25.399244] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:21:03.548 18:40:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:21:03.548 18:40:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0
00:21:03.548 18:40:26 nvmf_tcp.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=96215
00:21:03.548 18:40:26 nvmf_tcp.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96188 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt
00:21:03.548 18:40:26 nvmf_tcp.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
00:21:03.806 18:40:26 nvmf_tcp.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
00:21:04.063 NVMe0n1
00:21:04.063 18:40:26 nvmf_tcp.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=96268
00:21:04.063 18:40:26 nvmf_tcp.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:21:04.063 18:40:26 nvmf_tcp.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1
00:21:04.063 Running I/O for 10 seconds...
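Before the 10-second run starts, the trace above records the whole bdevperf-side setup for this test phase: bdevperf is started with -z so the workload only begins when perform_tests is sent over the RPC socket, the NVMe bdev options are set, and the remote controller is attached with a 2-second reconnect delay and a 5-second controller-loss timeout. A condensed sketch of that sequence, assuming the SPDK repository layout, socket path, and addresses used by this job; the explicit backgrounding with '&' is an assumption of the sketch, since the real script manages these steps through its own helper functions:

  # Start bdevperf in wait-for-RPC mode: queue depth 128, 4096-byte random reads, 10 s run
  ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w randread -t 10 -f &
  # Apply the NVMe bdev options used by this run (flags reproduced from the log)
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
  # Attach the remote controller; reconnect every 2 s, declare it lost after 5 s
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
  # Kick off the I/O workload over the RPC socket
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &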
00:21:04.998 18:40:27 nvmf_tcp.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:05.260 [2024-07-15 18:40:27.780278] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.260 [2024-07-15 18:40:27.780323] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.260 [2024-07-15 18:40:27.780333] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.260 [2024-07-15 18:40:27.780341] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.260 [2024-07-15 18:40:27.780350] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.260 [2024-07-15 18:40:27.780358] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.260 [2024-07-15 18:40:27.780366] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.260 [2024-07-15 18:40:27.780373] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.260 [2024-07-15 18:40:27.780381] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.260 [2024-07-15 18:40:27.780389] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.260 [2024-07-15 18:40:27.780397] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.260 [2024-07-15 18:40:27.780405] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.260 [2024-07-15 18:40:27.780413] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.260 [2024-07-15 18:40:27.780420] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.260 [2024-07-15 18:40:27.780428] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.260 [2024-07-15 18:40:27.780436] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.260 [2024-07-15 18:40:27.780443] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.260 [2024-07-15 18:40:27.780451] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.260 [2024-07-15 18:40:27.780459] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.260 [2024-07-15 18:40:27.780466] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.260 [2024-07-15 18:40:27.780474] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.260 [2024-07-15 18:40:27.780482] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.260 [2024-07-15 18:40:27.780490] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.260 [2024-07-15 18:40:27.780498] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.260 [2024-07-15 18:40:27.780505] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.260 [2024-07-15 18:40:27.780513] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.260 [2024-07-15 18:40:27.780521] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.260 [2024-07-15 18:40:27.780529] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.260 [2024-07-15 18:40:27.780537] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.260 [2024-07-15 18:40:27.780545] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.260 [2024-07-15 18:40:27.780553] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.260 [2024-07-15 18:40:27.780561] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.260 [2024-07-15 18:40:27.780579] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.260 [2024-07-15 18:40:27.780587] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.260 [2024-07-15 18:40:27.780596] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.260 [2024-07-15 18:40:27.780603] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.260 [2024-07-15 18:40:27.780611] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.260 [2024-07-15 18:40:27.780619] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.260 [2024-07-15 18:40:27.780627] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.260 [2024-07-15 18:40:27.780635] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.260 [2024-07-15 18:40:27.780643] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.260 [2024-07-15 18:40:27.780651] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the 
state(5) to be set 00:21:05.260 [2024-07-15 18:40:27.780658] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.260 [2024-07-15 18:40:27.780666] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.260 [2024-07-15 18:40:27.780674] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.260 [2024-07-15 18:40:27.780681] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.260 [2024-07-15 18:40:27.780689] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.260 [2024-07-15 18:40:27.780697] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.260 [2024-07-15 18:40:27.780705] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.260 [2024-07-15 18:40:27.780713] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.260 [2024-07-15 18:40:27.780721] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.260 [2024-07-15 18:40:27.780729] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.260 [2024-07-15 18:40:27.780737] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.260 [2024-07-15 18:40:27.780745] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.260 [2024-07-15 18:40:27.780752] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.260 [2024-07-15 18:40:27.780767] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.261 [2024-07-15 18:40:27.780775] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.261 [2024-07-15 18:40:27.780783] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.261 [2024-07-15 18:40:27.780791] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.261 [2024-07-15 18:40:27.780798] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.261 [2024-07-15 18:40:27.780806] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.261 [2024-07-15 18:40:27.780813] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.261 [2024-07-15 18:40:27.780821] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.261 [2024-07-15 18:40:27.780829] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.261 [2024-07-15 18:40:27.780836] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.261 [2024-07-15 18:40:27.780844] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.261 [2024-07-15 18:40:27.780852] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.261 [2024-07-15 18:40:27.780860] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.261 [2024-07-15 18:40:27.780868] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.261 [2024-07-15 18:40:27.780876] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.261 [2024-07-15 18:40:27.780883] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.261 [2024-07-15 18:40:27.780891] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.261 [2024-07-15 18:40:27.780898] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.261 [2024-07-15 18:40:27.780906] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.261 [2024-07-15 18:40:27.780913] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.261 [2024-07-15 18:40:27.780921] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.261 [2024-07-15 18:40:27.780929] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.261 [2024-07-15 18:40:27.780938] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.261 [2024-07-15 18:40:27.780945] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.261 [2024-07-15 18:40:27.780953] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.261 [2024-07-15 18:40:27.780961] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.261 [2024-07-15 18:40:27.780968] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.261 [2024-07-15 18:40:27.780976] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.261 [2024-07-15 18:40:27.780984] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.261 [2024-07-15 18:40:27.780992] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.261 [2024-07-15 
18:40:27.781000] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.261 [2024-07-15 18:40:27.781007] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.261 [2024-07-15 18:40:27.781015] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.261 [2024-07-15 18:40:27.781023] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.261 [2024-07-15 18:40:27.781030] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.261 [2024-07-15 18:40:27.781038] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.261 [2024-07-15 18:40:27.781046] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.261 [2024-07-15 18:40:27.781055] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.261 [2024-07-15 18:40:27.781062] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.261 [2024-07-15 18:40:27.781070] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.261 [2024-07-15 18:40:27.781078] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.261 [2024-07-15 18:40:27.781086] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.261 [2024-07-15 18:40:27.781093] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.261 [2024-07-15 18:40:27.781101] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.261 [2024-07-15 18:40:27.781109] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.261 [2024-07-15 18:40:27.781116] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.261 [2024-07-15 18:40:27.781124] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.261 [2024-07-15 18:40:27.781132] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.261 [2024-07-15 18:40:27.781139] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.261 [2024-07-15 18:40:27.781147] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.261 [2024-07-15 18:40:27.781155] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.261 [2024-07-15 18:40:27.781162] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same 
with the state(5) to be set 00:21:05.261 [2024-07-15 18:40:27.781170] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.261 [2024-07-15 18:40:27.781178] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.261 [2024-07-15 18:40:27.781185] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.261 [2024-07-15 18:40:27.781193] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.261 [2024-07-15 18:40:27.781201] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.261 [2024-07-15 18:40:27.781209] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.261 [2024-07-15 18:40:27.781216] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.261 [2024-07-15 18:40:27.781224] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.261 [2024-07-15 18:40:27.781232] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.261 [2024-07-15 18:40:27.781240] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.261 [2024-07-15 18:40:27.781247] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.261 [2024-07-15 18:40:27.781255] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.261 [2024-07-15 18:40:27.781263] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.261 [2024-07-15 18:40:27.781270] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.261 [2024-07-15 18:40:27.781278] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.261 [2024-07-15 18:40:27.781286] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.261 [2024-07-15 18:40:27.781294] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.261 [2024-07-15 18:40:27.781302] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.261 [2024-07-15 18:40:27.781310] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f870 is same with the state(5) to be set 00:21:05.261 [2024-07-15 18:40:27.781630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:84280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.261 [2024-07-15 18:40:27.781659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.261 [2024-07-15 18:40:27.781678] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:30184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.261 [2024-07-15 18:40:27.781687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.261 [2024-07-15 18:40:27.781698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:55248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.261 [2024-07-15 18:40:27.781707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.261 [2024-07-15 18:40:27.781717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:106368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.261 [2024-07-15 18:40:27.781726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.261 [2024-07-15 18:40:27.781736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:42592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.261 [2024-07-15 18:40:27.781746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.261 [2024-07-15 18:40:27.781756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:130144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.261 [2024-07-15 18:40:27.781764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.261 [2024-07-15 18:40:27.781774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:51080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.261 [2024-07-15 18:40:27.781782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.261 [2024-07-15 18:40:27.781792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:91824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.261 [2024-07-15 18:40:27.781801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.261 [2024-07-15 18:40:27.781810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:90040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.261 [2024-07-15 18:40:27.781819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.261 [2024-07-15 18:40:27.781829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:31456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.261 [2024-07-15 18:40:27.781837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.262 [2024-07-15 18:40:27.781847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:115696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.262 [2024-07-15 18:40:27.781856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.262 [2024-07-15 18:40:27.781865] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:90 nsid:1 lba:123416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.262 [2024-07-15 18:40:27.781874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.262 [2024-07-15 18:40:27.781883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:61472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.262 [2024-07-15 18:40:27.781892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.262 [2024-07-15 18:40:27.781902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:54472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.262 [2024-07-15 18:40:27.781910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.262 [2024-07-15 18:40:27.781920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:21936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.262 [2024-07-15 18:40:27.781928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.262 [2024-07-15 18:40:27.781938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:54496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.262 [2024-07-15 18:40:27.781946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.262 [2024-07-15 18:40:27.781956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:123408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.262 [2024-07-15 18:40:27.781965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.262 [2024-07-15 18:40:27.781976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:114904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.262 [2024-07-15 18:40:27.781985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.262 [2024-07-15 18:40:27.781994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:7440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.262 [2024-07-15 18:40:27.782003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.262 [2024-07-15 18:40:27.782012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:105888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.262 [2024-07-15 18:40:27.782021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.262 [2024-07-15 18:40:27.782030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:106368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.262 [2024-07-15 18:40:27.782039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.262 [2024-07-15 18:40:27.782049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 
lba:88440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.262 [2024-07-15 18:40:27.782057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.262 [2024-07-15 18:40:27.782067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:115128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.262 [2024-07-15 18:40:27.782075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.262 [2024-07-15 18:40:27.782085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:124504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.262 [2024-07-15 18:40:27.782093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.262 [2024-07-15 18:40:27.782103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:82408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.262 [2024-07-15 18:40:27.782111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.262 [2024-07-15 18:40:27.782121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:35960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.262 [2024-07-15 18:40:27.782130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.262 [2024-07-15 18:40:27.782140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:41688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.262 [2024-07-15 18:40:27.782148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.262 [2024-07-15 18:40:27.782158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:79016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.262 [2024-07-15 18:40:27.782166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.262 [2024-07-15 18:40:27.782175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:107056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.262 [2024-07-15 18:40:27.782184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.262 [2024-07-15 18:40:27.782193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:117528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.262 [2024-07-15 18:40:27.782206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.262 [2024-07-15 18:40:27.782216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:83752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.262 [2024-07-15 18:40:27.782226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.262 [2024-07-15 18:40:27.782236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:63904 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:21:05.262 [2024-07-15 18:40:27.782245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.262 [2024-07-15 18:40:27.782254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:2264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.262 [2024-07-15 18:40:27.782263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.262 [2024-07-15 18:40:27.782273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:105032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.262 [2024-07-15 18:40:27.782281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.262 [2024-07-15 18:40:27.782291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:101600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.262 [2024-07-15 18:40:27.782299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.262 [2024-07-15 18:40:27.782309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:74200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.262 [2024-07-15 18:40:27.782317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.262 [2024-07-15 18:40:27.782327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:89544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.262 [2024-07-15 18:40:27.782336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.262 [2024-07-15 18:40:27.782345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:49744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.262 [2024-07-15 18:40:27.782354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.262 [2024-07-15 18:40:27.782363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:113064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.262 [2024-07-15 18:40:27.782372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.262 [2024-07-15 18:40:27.782381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:42336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.262 [2024-07-15 18:40:27.782390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.262 [2024-07-15 18:40:27.782400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:47240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.262 [2024-07-15 18:40:27.782408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.262 [2024-07-15 18:40:27.782418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:98048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.262 
[2024-07-15 18:40:27.782427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.262 [2024-07-15 18:40:27.782437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:109256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.262 [2024-07-15 18:40:27.782445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.262 [2024-07-15 18:40:27.782455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:112592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.262 [2024-07-15 18:40:27.782464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.262 [2024-07-15 18:40:27.782473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:69624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.262 [2024-07-15 18:40:27.782482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.262 [2024-07-15 18:40:27.782491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:44976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.262 [2024-07-15 18:40:27.782501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.262 [2024-07-15 18:40:27.782511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:99560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.262 [2024-07-15 18:40:27.782521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.262 [2024-07-15 18:40:27.782531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:96312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.262 [2024-07-15 18:40:27.782539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.262 [2024-07-15 18:40:27.782549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:125576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.262 [2024-07-15 18:40:27.782558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.262 [2024-07-15 18:40:27.782576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:34616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.262 [2024-07-15 18:40:27.782585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.262 [2024-07-15 18:40:27.782595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:83000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.262 [2024-07-15 18:40:27.782603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.262 [2024-07-15 18:40:27.782613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:114992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.262 [2024-07-15 18:40:27.782621] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.262 [2024-07-15 18:40:27.782631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:110152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.263 [2024-07-15 18:40:27.782639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.263 [2024-07-15 18:40:27.782649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:121832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.263 [2024-07-15 18:40:27.782657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.263 [2024-07-15 18:40:27.782668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:60528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.263 [2024-07-15 18:40:27.782676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.263 [2024-07-15 18:40:27.782686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:121040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.263 [2024-07-15 18:40:27.782694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.263 [2024-07-15 18:40:27.782704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:118928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.263 [2024-07-15 18:40:27.782712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.263 [2024-07-15 18:40:27.782722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:87952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.263 [2024-07-15 18:40:27.782730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.263 [2024-07-15 18:40:27.782740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:55504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.263 [2024-07-15 18:40:27.782749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.263 [2024-07-15 18:40:27.782758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:106648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.263 [2024-07-15 18:40:27.782767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.263 [2024-07-15 18:40:27.782777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:75360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.263 [2024-07-15 18:40:27.782785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.263 [2024-07-15 18:40:27.782795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:75656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.263 [2024-07-15 18:40:27.782805] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.263 [2024-07-15 18:40:27.782815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:52208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.263 [2024-07-15 18:40:27.782825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.263 [2024-07-15 18:40:27.782834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:34224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.263 [2024-07-15 18:40:27.782843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.263 [2024-07-15 18:40:27.782853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:87088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.263 [2024-07-15 18:40:27.782861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.263 [2024-07-15 18:40:27.782871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:11424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.263 [2024-07-15 18:40:27.782879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.263 [2024-07-15 18:40:27.782889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:49192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.263 [2024-07-15 18:40:27.782897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.263 [2024-07-15 18:40:27.782908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:127136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.263 [2024-07-15 18:40:27.782916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.263 [2024-07-15 18:40:27.782926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:40424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.263 [2024-07-15 18:40:27.782934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.263 [2024-07-15 18:40:27.782944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:68408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.263 [2024-07-15 18:40:27.782952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.263 [2024-07-15 18:40:27.782962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:79632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.263 [2024-07-15 18:40:27.782970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.263 [2024-07-15 18:40:27.782980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:103824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.263 [2024-07-15 18:40:27.782988] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.263 [2024-07-15 18:40:27.782998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:38672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.263 [2024-07-15 18:40:27.783006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.263 [2024-07-15 18:40:27.783016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:129816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.263 [2024-07-15 18:40:27.783024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.263 [2024-07-15 18:40:27.783034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:2528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.263 [2024-07-15 18:40:27.783042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.263 [2024-07-15 18:40:27.783052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:50168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.263 [2024-07-15 18:40:27.783061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.263 [2024-07-15 18:40:27.783070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:12928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.263 [2024-07-15 18:40:27.783079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.263 [2024-07-15 18:40:27.783088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:112536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.263 [2024-07-15 18:40:27.783098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.263 [2024-07-15 18:40:27.783108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:63824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.263 [2024-07-15 18:40:27.783117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.263 [2024-07-15 18:40:27.783127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:112392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.263 [2024-07-15 18:40:27.783136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.263 [2024-07-15 18:40:27.783146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:21184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.263 [2024-07-15 18:40:27.783155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.263 [2024-07-15 18:40:27.783165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:44936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.263 [2024-07-15 18:40:27.783173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.263 [2024-07-15 18:40:27.783183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:32160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.263 [2024-07-15 18:40:27.783191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.263 [2024-07-15 18:40:27.783201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:83264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.263 [2024-07-15 18:40:27.783210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.263 [2024-07-15 18:40:27.783220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:14784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.263 [2024-07-15 18:40:27.783228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.263 [2024-07-15 18:40:27.783238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:111944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.263 [2024-07-15 18:40:27.783246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.263 [2024-07-15 18:40:27.783256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:30320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.263 [2024-07-15 18:40:27.783272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.263 [2024-07-15 18:40:27.783282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:86200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.263 [2024-07-15 18:40:27.783290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.263 [2024-07-15 18:40:27.783300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.263 [2024-07-15 18:40:27.783309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.263 [2024-07-15 18:40:27.783318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:93856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.263 [2024-07-15 18:40:27.783327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.263 [2024-07-15 18:40:27.783336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:80632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.263 [2024-07-15 18:40:27.783345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.263 [2024-07-15 18:40:27.783354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:96000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.263 [2024-07-15 18:40:27.783363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:05.263 [2024-07-15 18:40:27.783373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:98632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.263 [2024-07-15 18:40:27.783381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.263 [2024-07-15 18:40:27.783391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:1256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.263 [2024-07-15 18:40:27.783401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.263 [2024-07-15 18:40:27.783411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:27136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.263 [2024-07-15 18:40:27.783420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.263 [2024-07-15 18:40:27.783430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:74040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.264 [2024-07-15 18:40:27.783439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.264 [2024-07-15 18:40:27.783449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:62272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.264 [2024-07-15 18:40:27.783457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.264 [2024-07-15 18:40:27.783467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:5368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.264 [2024-07-15 18:40:27.783475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.264 [2024-07-15 18:40:27.783485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.264 [2024-07-15 18:40:27.783494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.264 [2024-07-15 18:40:27.783503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:127984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.264 [2024-07-15 18:40:27.783512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.264 [2024-07-15 18:40:27.783521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:20160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.264 [2024-07-15 18:40:27.783529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.264 [2024-07-15 18:40:27.783539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.264 [2024-07-15 18:40:27.783548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.264 [2024-07-15 18:40:27.783558] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:67488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.264 [2024-07-15 18:40:27.783572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.264 [2024-07-15 18:40:27.783583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:106288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.264 [2024-07-15 18:40:27.783591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.264 [2024-07-15 18:40:27.783601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:49992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.264 [2024-07-15 18:40:27.783610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.264 [2024-07-15 18:40:27.783621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:104232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.264 [2024-07-15 18:40:27.783629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.264 [2024-07-15 18:40:27.783649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:26920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.264 [2024-07-15 18:40:27.783657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.264 [2024-07-15 18:40:27.783667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:93856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.264 [2024-07-15 18:40:27.783675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.264 [2024-07-15 18:40:27.783685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:90048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.264 [2024-07-15 18:40:27.783693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.264 [2024-07-15 18:40:27.783703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:128120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.264 [2024-07-15 18:40:27.783712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.264 [2024-07-15 18:40:27.783722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:64248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.264 [2024-07-15 18:40:27.783732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.264 [2024-07-15 18:40:27.783742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:63816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.264 [2024-07-15 18:40:27.783751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.264 [2024-07-15 18:40:27.783760] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:74416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.264 [2024-07-15 18:40:27.783769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.264 [2024-07-15 18:40:27.783778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:25048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.264 [2024-07-15 18:40:27.783786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.264 [2024-07-15 18:40:27.783796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:12600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.264 [2024-07-15 18:40:27.783805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.264 [2024-07-15 18:40:27.783814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:4544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.264 [2024-07-15 18:40:27.783823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.264 [2024-07-15 18:40:27.783832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:68080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.264 [2024-07-15 18:40:27.783841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.264 [2024-07-15 18:40:27.783851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:77224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.264 [2024-07-15 18:40:27.783859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.264 [2024-07-15 18:40:27.783868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:42912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.264 [2024-07-15 18:40:27.783877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.264 [2024-07-15 18:40:27.783886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:5648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.264 [2024-07-15 18:40:27.783895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.264 [2024-07-15 18:40:27.783905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.264 [2024-07-15 18:40:27.783913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.264 [2024-07-15 18:40:27.783922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:47536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.264 [2024-07-15 18:40:27.783931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.264 [2024-07-15 18:40:27.783940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:65 nsid:1 lba:4456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.264 [2024-07-15 18:40:27.783949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.264 [2024-07-15 18:40:27.783958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:49120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.264 [2024-07-15 18:40:27.783967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.264 [2024-07-15 18:40:27.783976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:121576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.264 [2024-07-15 18:40:27.783985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.264 [2024-07-15 18:40:27.783994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:125768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.264 [2024-07-15 18:40:27.784003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.264 [2024-07-15 18:40:27.784013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:35128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.264 [2024-07-15 18:40:27.784023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.264 [2024-07-15 18:40:27.784045] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:05.264 [2024-07-15 18:40:27.784053] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:05.264 [2024-07-15 18:40:27.784060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95288 len:8 PRP1 0x0 PRP2 0x0 00:21:05.264 [2024-07-15 18:40:27.784069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.264 [2024-07-15 18:40:27.784113] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x114d8d0 was disconnected and freed. reset controller. 
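Every READ still queued on the I/O qpair above is completed manually with ABORTED - SQ DELETION once the qpair is disconnected and freed for the controller reset. If a quick tally of those aborts is wanted from a saved copy of this output, a plain grep is enough (the log filename below is hypothetical):

  # count the aborted completions reported during the qpair teardown
  grep -c 'ABORTED - SQ DELETION' nvmf_timeout.log
  # count the queued READ commands printed before being aborted
  grep -c 'nvme_io_qpair_print_command' nvmf_timeout.log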
00:21:05.264 [2024-07-15 18:40:27.784185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:05.264 [2024-07-15 18:40:27.784197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.265 [2024-07-15 18:40:27.784207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:05.265 [2024-07-15 18:40:27.784215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.265 [2024-07-15 18:40:27.784224] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:05.265 [2024-07-15 18:40:27.784232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.265 [2024-07-15 18:40:27.784241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:05.265 [2024-07-15 18:40:27.784250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.265 [2024-07-15 18:40:27.784258] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10e0240 is same with the state(5) to be set 00:21:05.265 [2024-07-15 18:40:27.784449] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:05.265 [2024-07-15 18:40:27.784466] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10e0240 (9): Bad file descriptor 00:21:05.265 [2024-07-15 18:40:27.784548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:05.265 [2024-07-15 18:40:27.784561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e0240 with addr=10.0.0.2, port=4420 00:21:05.265 [2024-07-15 18:40:27.784581] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10e0240 is same with the state(5) to be set 00:21:05.265 [2024-07-15 18:40:27.784595] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10e0240 (9): Bad file descriptor 00:21:05.265 [2024-07-15 18:40:27.784608] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:05.265 [2024-07-15 18:40:27.784617] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:05.265 [2024-07-15 18:40:27.784626] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:05.265 [2024-07-15 18:40:27.784642] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:05.265 [2024-07-15 18:40:27.784651] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:05.265 18:40:27 nvmf_tcp.nvmf_timeout -- host/timeout.sh@128 -- # wait 96268 00:21:07.794 [2024-07-15 18:40:29.800406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:07.794 [2024-07-15 18:40:29.800471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e0240 with addr=10.0.0.2, port=4420 00:21:07.794 [2024-07-15 18:40:29.800485] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10e0240 is same with the state(5) to be set 00:21:07.794 [2024-07-15 18:40:29.800508] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10e0240 (9): Bad file descriptor 00:21:07.794 [2024-07-15 18:40:29.800524] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:07.794 [2024-07-15 18:40:29.800533] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:07.794 [2024-07-15 18:40:29.800543] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:07.794 [2024-07-15 18:40:29.800574] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:07.794 [2024-07-15 18:40:29.800585] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:09.695 [2024-07-15 18:40:31.797505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:09.695 [2024-07-15 18:40:31.797574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e0240 with addr=10.0.0.2, port=4420 00:21:09.695 [2024-07-15 18:40:31.797588] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10e0240 is same with the state(5) to be set 00:21:09.695 [2024-07-15 18:40:31.797612] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10e0240 (9): Bad file descriptor 00:21:09.695 [2024-07-15 18:40:31.797626] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:09.695 [2024-07-15 18:40:31.797635] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:09.695 [2024-07-15 18:40:31.797645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:09.695 [2024-07-15 18:40:31.797665] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:09.695 [2024-07-15 18:40:31.797675] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:11.636 [2024-07-15 18:40:33.794495] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
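The loop above, connect() failing with errno 111, the qpair reporting a bad file descriptor, and bdev_nvme scheduling another reset attempt roughly every two seconds, is governed by the reconnect options passed when the NVMe bdev controller is attached. A sketch of that kind of attach call follows; the RPC socket path and the timeout values are hypothetical rather than read from this run, only the option names are the standard bdev_nvme_attach_controller ones:

  # attach a TCP controller with explicit reconnect/loss timeouts (values illustrative)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --reconnect-delay-sec 2 --ctrlr-loss-timeout-sec 8 --fast-io-fail-timeout-sec 4

With settings like these the bdev layer retries the connection every reconnect-delay-sec seconds and only gives up, as it does further down, once ctrlr-loss-timeout-sec has elapsed; the trace.txt check later in the test counts the 'reconnect delay bdev controller NVMe0' lines to confirm that several delayed reconnects happened before that point.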
00:21:11.636 [2024-07-15 18:40:33.794557] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:11.636 [2024-07-15 18:40:33.794573] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:11.636 [2024-07-15 18:40:33.794584] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:21:11.636 [2024-07-15 18:40:33.794606] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:12.202 00:21:12.202 Latency(us) 00:21:12.202 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:12.202 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:21:12.202 NVMe0n1 : 8.12 3211.87 12.55 15.76 0.00 39557.87 1842.38 7061253.96 00:21:12.202 =================================================================================================================== 00:21:12.202 Total : 3211.87 12.55 15.76 0.00 39557.87 1842.38 7061253.96 00:21:12.202 0 00:21:12.202 18:40:34 nvmf_tcp.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:12.202 Attaching 5 probes... 00:21:12.202 1132.281481: reset bdev controller NVMe0 00:21:12.202 1132.330250: reconnect bdev controller NVMe0 00:21:12.202 3148.119532: reconnect delay bdev controller NVMe0 00:21:12.202 3148.137570: reconnect bdev controller NVMe0 00:21:12.202 5145.218891: reconnect delay bdev controller NVMe0 00:21:12.202 5145.240093: reconnect bdev controller NVMe0 00:21:12.202 7142.307072: reconnect delay bdev controller NVMe0 00:21:12.202 7142.328152: reconnect bdev controller NVMe0 00:21:12.202 18:40:34 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:21:12.202 18:40:34 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:21:12.202 18:40:34 nvmf_tcp.nvmf_timeout -- host/timeout.sh@136 -- # kill 96215 00:21:12.202 18:40:34 nvmf_tcp.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:12.461 18:40:34 nvmf_tcp.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 96188 00:21:12.461 18:40:34 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 96188 ']' 00:21:12.461 18:40:34 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 96188 00:21:12.461 18:40:34 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:21:12.461 18:40:34 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:12.461 18:40:34 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96188 00:21:12.461 killing process with pid 96188 00:21:12.461 Received shutdown signal, test time was about 8.212947 seconds 00:21:12.461 00:21:12.461 Latency(us) 00:21:12.461 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:12.461 =================================================================================================================== 00:21:12.461 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:12.461 18:40:34 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:12.461 18:40:34 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:12.461 18:40:34 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96188' 00:21:12.461 18:40:34 nvmf_tcp.nvmf_timeout 
-- common/autotest_common.sh@967 -- # kill 96188 00:21:12.461 18:40:34 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 96188 00:21:12.461 18:40:35 nvmf_tcp.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:12.719 18:40:35 nvmf_tcp.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:21:12.719 18:40:35 nvmf_tcp.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:21:12.720 18:40:35 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:12.720 18:40:35 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@117 -- # sync 00:21:12.720 18:40:35 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:12.720 18:40:35 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@120 -- # set +e 00:21:12.720 18:40:35 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:12.720 18:40:35 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:12.720 rmmod nvme_tcp 00:21:12.720 rmmod nvme_fabrics 00:21:12.720 rmmod nvme_keyring 00:21:12.978 18:40:35 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:12.978 18:40:35 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@124 -- # set -e 00:21:12.978 18:40:35 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@125 -- # return 0 00:21:12.978 18:40:35 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@489 -- # '[' -n 95612 ']' 00:21:12.978 18:40:35 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@490 -- # killprocess 95612 00:21:12.978 18:40:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 95612 ']' 00:21:12.978 18:40:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 95612 00:21:12.978 18:40:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:21:12.978 18:40:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:12.978 18:40:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 95612 00:21:12.978 killing process with pid 95612 00:21:12.978 18:40:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:12.978 18:40:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:12.978 18:40:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 95612' 00:21:12.978 18:40:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 95612 00:21:12.978 18:40:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 95612 00:21:13.237 18:40:35 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:13.237 18:40:35 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:13.237 18:40:35 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:13.237 18:40:35 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:13.237 18:40:35 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:13.237 18:40:35 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:13.237 18:40:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:13.237 18:40:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:13.237 18:40:35 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:13.237 00:21:13.237 real 0m45.819s 00:21:13.237 user 2m12.790s 
00:21:13.237 sys 0m5.833s 00:21:13.238 18:40:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:13.238 18:40:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:13.238 ************************************ 00:21:13.238 END TEST nvmf_timeout 00:21:13.238 ************************************ 00:21:13.238 18:40:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:13.238 18:40:35 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ virt == phy ]] 00:21:13.238 18:40:35 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:21:13.238 18:40:35 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:13.238 18:40:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:13.238 18:40:35 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:21:13.238 00:21:13.238 real 14m43.266s 00:21:13.238 user 37m58.083s 00:21:13.238 sys 3m56.275s 00:21:13.238 18:40:35 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:13.238 18:40:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:13.238 ************************************ 00:21:13.238 END TEST nvmf_tcp 00:21:13.238 ************************************ 00:21:13.497 18:40:35 -- common/autotest_common.sh@1142 -- # return 0 00:21:13.497 18:40:35 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:21:13.497 18:40:35 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:21:13.497 18:40:35 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:13.497 18:40:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:13.497 18:40:35 -- common/autotest_common.sh@10 -- # set +x 00:21:13.497 ************************************ 00:21:13.497 START TEST spdkcli_nvmf_tcp 00:21:13.497 ************************************ 00:21:13.497 18:40:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:21:13.497 * Looking for test storage... 
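Before the next test gets going, note that the nvmftestfini call at the end of the nvmf_timeout run above tears the fixture down in a few steps: unload the host-side NVMe/TCP modules, kill the nvmf_tgt process, drop the target network namespace, and flush the initiator interface. A condensed manual equivalent is sketched below; it assumes remove_spdk_ns boils down to deleting the nvmf_tgt_ns_spdk namespace, and it reuses the pid reported by killprocess in this log:

  modprobe -v -r nvme-tcp nvme-fabrics   # same modules removed above
  kill 95612                             # nvmf_tgt pid from killprocess 95612
  ip netns delete nvmf_tgt_ns_spdk       # assumed effect of remove_spdk_ns
  ip -4 addr flush nvmf_init_if          # as run by nvmf_tcp_fini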
00:21:13.497 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:21:13.497 18:40:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:21:13.497 18:40:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:21:13.497 18:40:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:21:13.497 18:40:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:13.497 18:40:36 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:21:13.497 18:40:36 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:13.497 18:40:36 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:13.497 18:40:36 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:13.497 18:40:36 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:13.497 18:40:36 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:13.497 18:40:36 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:13.497 18:40:36 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:13.497 18:40:36 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:13.497 18:40:36 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:13.497 18:40:36 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:13.497 18:40:36 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:21:13.497 18:40:36 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=ee8aff67-4252-4979-91cf-1a72f40d57b6 00:21:13.497 18:40:36 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:13.497 18:40:36 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:13.497 18:40:36 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:13.497 18:40:36 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:13.497 18:40:36 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:13.497 18:40:36 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:13.497 18:40:36 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:13.497 18:40:36 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:13.497 18:40:36 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.497 18:40:36 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.497 18:40:36 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.497 18:40:36 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:21:13.497 18:40:36 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.497 18:40:36 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:21:13.497 18:40:36 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:13.497 18:40:36 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:13.497 18:40:36 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:13.497 18:40:36 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:13.497 18:40:36 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:13.497 18:40:36 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:13.497 18:40:36 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:13.497 18:40:36 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:13.497 18:40:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:21:13.497 18:40:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:21:13.497 18:40:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:21:13.497 18:40:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:21:13.497 18:40:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:13.497 18:40:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:13.497 18:40:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:21:13.497 18:40:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=96491 00:21:13.497 18:40:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 96491 00:21:13.497 18:40:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 96491 ']' 00:21:13.497 18:40:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:13.497 18:40:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:21:13.497 18:40:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:13.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
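The spdkcli test then brings up its own target: spdkcli/common.sh launches /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 and waitforlisten blocks until the RPC socket at /var/tmp/spdk.sock answers. A minimal sketch of that launch-and-wait pattern (the polling loop is illustrative, not the actual waitforlisten implementation):

  # start the target on cores 0-1 and wait for its RPC socket to come up
  /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 &
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done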
00:21:13.497 18:40:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:13.497 18:40:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:13.497 18:40:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:13.497 [2024-07-15 18:40:36.107716] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:21:13.497 [2024-07-15 18:40:36.107793] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96491 ] 00:21:13.768 [2024-07-15 18:40:36.235551] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:13.768 [2024-07-15 18:40:36.325332] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:13.768 [2024-07-15 18:40:36.325334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:14.702 18:40:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:14.702 18:40:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:21:14.702 18:40:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:21:14.702 18:40:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:14.702 18:40:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:14.702 18:40:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:21:14.702 18:40:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:21:14.702 18:40:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:21:14.702 18:40:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:14.702 18:40:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:14.702 18:40:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:21:14.702 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:21:14.702 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:21:14.702 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:21:14.702 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:21:14.702 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:21:14.702 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:21:14.702 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:21:14.702 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:21:14.702 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:21:14.702 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:21:14.702 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:21:14.702 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:21:14.702 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:21:14.702 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:21:14.702 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:21:14.702 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:21:14.702 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:21:14.702 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:21:14.702 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:21:14.702 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:21:14.702 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:21:14.702 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:21:14.702 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:21:14.702 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:21:14.702 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:21:14.702 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:21:14.702 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:21:14.702 ' 00:21:17.233 [2024-07-15 18:40:39.679674] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:18.611 [2024-07-15 18:40:40.986882] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:21:21.143 [2024-07-15 18:40:43.412824] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:21:23.042 [2024-07-15 18:40:45.507658] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:21:24.943 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:21:24.943 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:21:24.943 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:21:24.943 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:21:24.943 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:21:24.943 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:21:24.943 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:21:24.943 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:21:24.943 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:21:24.943 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:21:24.943 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:21:24.943 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:21:24.943 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:21:24.943 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:21:24.943 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:21:24.943 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:21:24.943 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:21:24.943 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:21:24.943 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:21:24.943 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:21:24.943 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:21:24.943 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:21:24.943 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:21:24.943 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:21:24.943 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:21:24.943 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:21:24.943 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:21:24.943 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:21:24.943 18:40:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:21:24.943 18:40:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:24.943 18:40:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:24.943 18:40:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:21:24.943 18:40:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:24.943 18:40:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:24.943 18:40:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:21:24.943 18:40:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:21:25.201 18:40:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:21:25.201 18:40:47 spdkcli_nvmf_tcp -- 
spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:21:25.201 18:40:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:21:25.201 18:40:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:25.201 18:40:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:25.201 18:40:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:21:25.201 18:40:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:25.201 18:40:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:25.201 18:40:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:21:25.201 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:21:25.201 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:21:25.201 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:21:25.201 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:21:25.201 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:21:25.201 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:21:25.201 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:21:25.201 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:21:25.201 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:21:25.201 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:21:25.201 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:21:25.201 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:21:25.201 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:21:25.201 ' 00:21:31.760 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:21:31.760 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:21:31.760 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:21:31.760 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:21:31.760 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:21:31.760 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:21:31.760 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:21:31.760 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:21:31.760 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:21:31.760 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:21:31.760 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:21:31.760 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:21:31.760 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 
00:21:31.760 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:21:31.760 18:40:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:21:31.760 18:40:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:31.760 18:40:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:31.760 18:40:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 96491 00:21:31.760 18:40:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 96491 ']' 00:21:31.760 18:40:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 96491 00:21:31.760 18:40:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:21:31.760 18:40:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:31.760 18:40:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96491 00:21:31.760 18:40:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:31.760 18:40:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:31.760 killing process with pid 96491 00:21:31.760 18:40:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96491' 00:21:31.760 18:40:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 96491 00:21:31.760 18:40:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 96491 00:21:31.760 18:40:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:21:31.760 18:40:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:21:31.760 18:40:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 96491 ']' 00:21:31.760 18:40:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 96491 00:21:31.760 18:40:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 96491 ']' 00:21:31.760 18:40:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 96491 00:21:31.760 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (96491) - No such process 00:21:31.760 Process with pid 96491 is not found 00:21:31.760 18:40:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 96491 is not found' 00:21:31.760 18:40:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:21:31.760 18:40:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:21:31.760 18:40:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:21:31.760 00:21:31.760 real 0m17.673s 00:21:31.760 user 0m38.532s 00:21:31.760 sys 0m1.040s 00:21:31.760 18:40:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:31.760 18:40:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:31.760 ************************************ 00:21:31.760 END TEST spdkcli_nvmf_tcp 00:21:31.760 ************************************ 00:21:31.760 18:40:53 -- common/autotest_common.sh@1142 -- # return 0 00:21:31.760 18:40:53 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:21:31.760 18:40:53 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:31.760 18:40:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:31.760 18:40:53 -- common/autotest_common.sh@10 -- # set +x 00:21:31.760 
************************************ 00:21:31.760 START TEST nvmf_identify_passthru 00:21:31.760 ************************************ 00:21:31.760 18:40:53 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:21:31.760 * Looking for test storage... 00:21:31.761 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:31.761 18:40:53 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:31.761 18:40:53 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:21:31.761 18:40:53 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:31.761 18:40:53 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:31.761 18:40:53 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:31.761 18:40:53 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:31.761 18:40:53 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:31.761 18:40:53 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:31.761 18:40:53 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:31.761 18:40:53 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:31.761 18:40:53 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:31.761 18:40:53 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:31.761 18:40:53 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:21:31.761 18:40:53 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=ee8aff67-4252-4979-91cf-1a72f40d57b6 00:21:31.761 18:40:53 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:31.761 18:40:53 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:31.761 18:40:53 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:31.761 18:40:53 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:31.761 18:40:53 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:31.761 18:40:53 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:31.761 18:40:53 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:31.761 18:40:53 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:31.761 18:40:53 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.761 18:40:53 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.761 18:40:53 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.761 18:40:53 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:21:31.761 18:40:53 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.761 18:40:53 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:21:31.761 18:40:53 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:31.761 18:40:53 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:31.761 18:40:53 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:31.761 18:40:53 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:31.761 18:40:53 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:31.761 18:40:53 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:31.761 18:40:53 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:31.761 18:40:53 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:31.761 18:40:53 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:31.761 18:40:53 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:31.761 18:40:53 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:31.761 18:40:53 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:31.761 18:40:53 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.761 18:40:53 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.761 18:40:53 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.761 18:40:53 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:21:31.761 18:40:53 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.761 18:40:53 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:21:31.761 18:40:53 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:31.761 18:40:53 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:31.761 18:40:53 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:31.761 18:40:53 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:31.761 18:40:53 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:31.761 18:40:53 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:31.761 18:40:53 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:31.761 18:40:53 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:31.761 18:40:53 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:21:31.761 18:40:53 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:21:31.761 18:40:53 nvmf_identify_passthru -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:21:31.761 18:40:53 nvmf_identify_passthru -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:21:31.761 18:40:53 nvmf_identify_passthru -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:21:31.761 18:40:53 nvmf_identify_passthru -- nvmf/common.sh@432 -- # nvmf_veth_init 00:21:31.761 18:40:53 nvmf_identify_passthru -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:31.761 18:40:53 nvmf_identify_passthru -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:31.761 18:40:53 nvmf_identify_passthru -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:31.761 18:40:53 nvmf_identify_passthru -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:31.761 18:40:53 nvmf_identify_passthru -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:31.761 18:40:53 nvmf_identify_passthru -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:31.761 18:40:53 nvmf_identify_passthru -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:31.761 18:40:53 nvmf_identify_passthru -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:31.761 18:40:53 nvmf_identify_passthru -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:31.761 18:40:53 nvmf_identify_passthru -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:31.761 18:40:53 nvmf_identify_passthru -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:31.761 18:40:53 nvmf_identify_passthru -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:31.761 18:40:53 nvmf_identify_passthru -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:31.761 18:40:53 nvmf_identify_passthru -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:31.761 Cannot find device "nvmf_tgt_br" 00:21:31.761 18:40:53 nvmf_identify_passthru -- nvmf/common.sh@155 -- # true 00:21:31.761 18:40:53 nvmf_identify_passthru -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:31.761 Cannot find device "nvmf_tgt_br2" 00:21:31.761 18:40:53 nvmf_identify_passthru -- nvmf/common.sh@156 -- # true 00:21:31.761 18:40:53 nvmf_identify_passthru -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:31.761 18:40:53 nvmf_identify_passthru -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:31.761 Cannot find device "nvmf_tgt_br" 00:21:31.761 18:40:53 nvmf_identify_passthru -- nvmf/common.sh@158 -- # true 00:21:31.761 18:40:53 nvmf_identify_passthru -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:31.761 Cannot find device "nvmf_tgt_br2" 00:21:31.761 18:40:53 nvmf_identify_passthru -- nvmf/common.sh@159 -- # true 00:21:31.761 18:40:53 nvmf_identify_passthru -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:31.761 18:40:53 nvmf_identify_passthru -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:31.761 18:40:53 nvmf_identify_passthru -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:31.761 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:31.761 18:40:53 nvmf_identify_passthru -- nvmf/common.sh@162 -- # true 00:21:31.761 18:40:53 nvmf_identify_passthru -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:31.761 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:31.761 18:40:53 nvmf_identify_passthru -- nvmf/common.sh@163 -- # true 00:21:31.761 18:40:53 nvmf_identify_passthru -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:31.761 18:40:53 nvmf_identify_passthru -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:31.761 18:40:53 nvmf_identify_passthru -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:31.761 18:40:54 nvmf_identify_passthru -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:31.761 18:40:54 nvmf_identify_passthru -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:31.761 18:40:54 nvmf_identify_passthru -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:31.761 18:40:54 nvmf_identify_passthru -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:21:31.761 18:40:54 nvmf_identify_passthru -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:31.761 18:40:54 nvmf_identify_passthru -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:31.761 18:40:54 nvmf_identify_passthru -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:31.761 18:40:54 nvmf_identify_passthru -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:31.762 18:40:54 nvmf_identify_passthru -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:31.762 18:40:54 nvmf_identify_passthru -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:31.762 18:40:54 nvmf_identify_passthru -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:31.762 18:40:54 nvmf_identify_passthru -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:31.762 18:40:54 nvmf_identify_passthru -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:31.762 18:40:54 nvmf_identify_passthru -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:31.762 18:40:54 nvmf_identify_passthru -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:31.762 18:40:54 nvmf_identify_passthru -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:31.762 18:40:54 nvmf_identify_passthru -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:31.762 18:40:54 nvmf_identify_passthru -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:31.762 18:40:54 nvmf_identify_passthru -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:31.762 18:40:54 nvmf_identify_passthru -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:31.762 18:40:54 nvmf_identify_passthru -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:31.762 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:31.762 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.127 ms 00:21:31.762 00:21:31.762 --- 10.0.0.2 ping statistics --- 00:21:31.762 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:31.762 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:21:31.762 18:40:54 nvmf_identify_passthru -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:31.762 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:31.762 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:21:31.762 00:21:31.762 --- 10.0.0.3 ping statistics --- 00:21:31.762 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:31.762 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:21:31.762 18:40:54 nvmf_identify_passthru -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:31.762 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:31.762 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:21:31.762 00:21:31.762 --- 10.0.0.1 ping statistics --- 00:21:31.762 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:31.762 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:21:31.762 18:40:54 nvmf_identify_passthru -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:31.762 18:40:54 nvmf_identify_passthru -- nvmf/common.sh@433 -- # return 0 00:21:31.762 18:40:54 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:31.762 18:40:54 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:31.762 18:40:54 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:31.762 18:40:54 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:31.762 18:40:54 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:31.762 18:40:54 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:31.762 18:40:54 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:31.762 18:40:54 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:21:31.762 18:40:54 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:31.762 18:40:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:21:31.762 18:40:54 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:21:31.762 18:40:54 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:21:31.762 18:40:54 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:21:31.762 18:40:54 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:21:31.762 18:40:54 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:21:31.762 18:40:54 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:21:31.762 18:40:54 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:21:31.762 18:40:54 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:21:31.762 18:40:54 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:21:31.762 18:40:54 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:21:31.762 18:40:54 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:21:31.762 18:40:54 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:21:31.762 18:40:54 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:21:31.762 18:40:54 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:00:10.0 00:21:31.762 18:40:54 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:10.0 ']' 00:21:31.762 18:40:54 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:21:31.762 18:40:54 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:21:31.762 18:40:54 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:21:32.020 18:40:54 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 
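The serial number captured just above comes from running spdk_nvme_identify against the local PCIe controller and keeping the third whitespace-separated field of the 'Serial Number:' line; the model number is pulled the same way in the entries that follow. A minimal sketch of that extraction, reusing the repo path and BDF reported earlier in this run (a sketch of the traced commands, not a replacement for the test script):

    #!/usr/bin/env bash
    # Sketch of the identify/grep/awk extraction traced above (paths and BDF taken from this log).
    set -euo pipefail

    rootdir=/home/vagrant/spdk_repo/spdk   # repo checkout used by this run
    bdf=0000:00:10.0                       # first NVMe BDF printed by gen_nvme.sh above

    identify() {
        "$rootdir/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0
    }

    # 'Serial Number:  12340' / 'Model Number:  QEMU ...' -> keep field 3 only.
    nvme_serial_number=$(identify | grep 'Serial Number:' | awk '{print $3}')
    nvme_model_number=$(identify | grep 'Model Number:' | awk '{print $3}')

    echo "serial=$nvme_serial_number model=$nvme_model_number"

The test later repeats the same identify over NVMe/TCP against the passthru subsystem and compares the two results, which is what the '12340 != 12340' and 'QEMU != QEMU' checks further down are doing.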
00:21:32.020 18:40:54 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:21:32.020 18:40:54 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:21:32.020 18:40:54 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:21:32.279 18:40:54 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:21:32.279 18:40:54 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:21:32.279 18:40:54 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:32.279 18:40:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:21:32.279 18:40:54 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:21:32.279 18:40:54 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:32.279 18:40:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:21:32.279 18:40:54 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=96998 00:21:32.279 18:40:54 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:32.279 18:40:54 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:32.279 18:40:54 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 96998 00:21:32.279 18:40:54 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 96998 ']' 00:21:32.279 18:40:54 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:32.279 18:40:54 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:32.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:32.279 18:40:54 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:32.279 18:40:54 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:32.279 18:40:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:21:32.279 [2024-07-15 18:40:54.869554] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:21:32.279 [2024-07-15 18:40:54.869647] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:32.536 [2024-07-15 18:40:55.011867] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:32.536 [2024-07-15 18:40:55.097129] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:32.536 [2024-07-15 18:40:55.097181] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:32.536 [2024-07-15 18:40:55.097191] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:32.536 [2024-07-15 18:40:55.097199] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:21:32.536 [2024-07-15 18:40:55.097206] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:32.536 [2024-07-15 18:40:55.097413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:32.536 [2024-07-15 18:40:55.097633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:32.536 [2024-07-15 18:40:55.098388] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:32.536 [2024-07-15 18:40:55.098389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:33.132 18:40:55 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:33.132 18:40:55 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:21:33.132 18:40:55 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:21:33.132 18:40:55 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.132 18:40:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:21:33.401 18:40:55 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.401 18:40:55 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:21:33.401 18:40:55 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.401 18:40:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:21:33.401 [2024-07-15 18:40:55.798508] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:21:33.401 18:40:55 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.401 18:40:55 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:33.401 18:40:55 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.401 18:40:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:21:33.401 [2024-07-15 18:40:55.811821] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:33.401 18:40:55 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.401 18:40:55 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:21:33.401 18:40:55 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:33.401 18:40:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:21:33.401 18:40:55 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:21:33.401 18:40:55 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.401 18:40:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:21:33.401 Nvme0n1 00:21:33.401 18:40:55 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.401 18:40:55 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:21:33.401 18:40:55 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.401 18:40:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:21:33.401 18:40:55 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.401 18:40:55 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:21:33.401 18:40:55 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.401 18:40:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:21:33.401 18:40:55 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.401 18:40:55 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:33.401 18:40:55 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.401 18:40:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:21:33.401 [2024-07-15 18:40:55.979674] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:33.401 18:40:55 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.401 18:40:55 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:21:33.401 18:40:55 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.401 18:40:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:21:33.401 [ 00:21:33.401 { 00:21:33.401 "allow_any_host": true, 00:21:33.401 "hosts": [], 00:21:33.401 "listen_addresses": [], 00:21:33.401 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:33.401 "subtype": "Discovery" 00:21:33.401 }, 00:21:33.401 { 00:21:33.401 "allow_any_host": true, 00:21:33.401 "hosts": [], 00:21:33.401 "listen_addresses": [ 00:21:33.401 { 00:21:33.401 "adrfam": "IPv4", 00:21:33.401 "traddr": "10.0.0.2", 00:21:33.401 "trsvcid": "4420", 00:21:33.401 "trtype": "TCP" 00:21:33.401 } 00:21:33.401 ], 00:21:33.401 "max_cntlid": 65519, 00:21:33.401 "max_namespaces": 1, 00:21:33.401 "min_cntlid": 1, 00:21:33.401 "model_number": "SPDK bdev Controller", 00:21:33.401 "namespaces": [ 00:21:33.401 { 00:21:33.401 "bdev_name": "Nvme0n1", 00:21:33.401 "name": "Nvme0n1", 00:21:33.401 "nguid": "667F5B5DA211440993E56710AD2067EB", 00:21:33.401 "nsid": 1, 00:21:33.401 "uuid": "667f5b5d-a211-4409-93e5-6710ad2067eb" 00:21:33.401 } 00:21:33.401 ], 00:21:33.401 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:33.401 "serial_number": "SPDK00000000000001", 00:21:33.401 "subtype": "NVMe" 00:21:33.401 } 00:21:33.401 ] 00:21:33.401 18:40:56 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.401 18:40:56 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:33.401 18:40:56 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:21:33.401 18:40:56 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:21:33.659 18:40:56 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:21:33.659 18:40:56 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:33.659 18:40:56 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:21:33.659 18:40:56 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:21:33.918 18:40:56 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:21:33.918 18:40:56 
nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:21:33.918 18:40:56 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:21:33.918 18:40:56 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:33.918 18:40:56 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.918 18:40:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:21:33.918 18:40:56 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.918 18:40:56 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:21:33.918 18:40:56 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:21:33.918 18:40:56 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:33.918 18:40:56 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:21:34.176 18:40:56 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:34.176 18:40:56 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:21:34.176 18:40:56 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:34.176 18:40:56 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:34.176 rmmod nvme_tcp 00:21:34.176 rmmod nvme_fabrics 00:21:34.176 rmmod nvme_keyring 00:21:34.176 18:40:56 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:34.176 18:40:56 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:21:34.176 18:40:56 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:21:34.176 18:40:56 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 96998 ']' 00:21:34.176 18:40:56 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 96998 00:21:34.176 18:40:56 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 96998 ']' 00:21:34.176 18:40:56 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 96998 00:21:34.176 18:40:56 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:21:34.176 18:40:56 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:34.176 18:40:56 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96998 00:21:34.176 18:40:56 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:34.176 18:40:56 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:34.177 18:40:56 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96998' 00:21:34.177 killing process with pid 96998 00:21:34.177 18:40:56 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 96998 00:21:34.177 18:40:56 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 96998 00:21:34.435 18:40:56 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:34.435 18:40:56 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:34.435 18:40:56 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:34.435 18:40:56 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:34.435 18:40:56 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:34.435 18:40:56 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:34.435 18:40:56 
nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:34.435 18:40:56 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:34.435 18:40:56 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:34.435 00:21:34.435 real 0m3.253s 00:21:34.435 user 0m7.307s 00:21:34.435 sys 0m1.069s 00:21:34.435 18:40:56 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:34.435 18:40:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:21:34.435 ************************************ 00:21:34.435 END TEST nvmf_identify_passthru 00:21:34.435 ************************************ 00:21:34.435 18:40:56 -- common/autotest_common.sh@1142 -- # return 0 00:21:34.435 18:40:56 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:21:34.435 18:40:56 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:34.435 18:40:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:34.435 18:40:56 -- common/autotest_common.sh@10 -- # set +x 00:21:34.435 ************************************ 00:21:34.435 START TEST nvmf_dif 00:21:34.435 ************************************ 00:21:34.435 18:40:56 nvmf_dif -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:21:34.693 * Looking for test storage... 00:21:34.693 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:34.693 18:40:57 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:34.693 18:40:57 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:21:34.693 18:40:57 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:34.693 18:40:57 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:34.693 18:40:57 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:34.693 18:40:57 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:34.693 18:40:57 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:34.693 18:40:57 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:34.693 18:40:57 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:34.693 18:40:57 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:34.693 18:40:57 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:34.693 18:40:57 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:34.693 18:40:57 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:21:34.693 18:40:57 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=ee8aff67-4252-4979-91cf-1a72f40d57b6 00:21:34.693 18:40:57 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:34.693 18:40:57 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:34.693 18:40:57 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:34.693 18:40:57 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:34.693 18:40:57 nvmf_dif -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:34.693 18:40:57 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:34.693 18:40:57 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:34.693 18:40:57 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:34.693 18:40:57 nvmf_dif -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.693 18:40:57 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.694 18:40:57 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.694 18:40:57 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:21:34.694 18:40:57 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.694 18:40:57 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:21:34.694 18:40:57 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:34.694 18:40:57 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:34.694 18:40:57 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:34.694 18:40:57 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:34.694 18:40:57 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:34.694 18:40:57 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:34.694 18:40:57 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:34.694 18:40:57 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:34.694 18:40:57 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:21:34.694 18:40:57 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:21:34.694 18:40:57 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:21:34.694 18:40:57 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:21:34.694 18:40:57 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:21:34.694 18:40:57 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:34.694 18:40:57 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:34.694 18:40:57 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:34.694 18:40:57 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:34.694 18:40:57 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:34.694 18:40:57 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:34.694 18:40:57 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:34.694 18:40:57 nvmf_dif -- common/autotest_common.sh@22 
-- # _remove_spdk_ns 00:21:34.694 18:40:57 nvmf_dif -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:21:34.694 18:40:57 nvmf_dif -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:21:34.694 18:40:57 nvmf_dif -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:21:34.694 18:40:57 nvmf_dif -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:21:34.694 18:40:57 nvmf_dif -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:21:34.694 18:40:57 nvmf_dif -- nvmf/common.sh@432 -- # nvmf_veth_init 00:21:34.694 18:40:57 nvmf_dif -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:34.694 18:40:57 nvmf_dif -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:34.694 18:40:57 nvmf_dif -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:34.694 18:40:57 nvmf_dif -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:34.694 18:40:57 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:34.694 18:40:57 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:34.694 18:40:57 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:34.694 18:40:57 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:34.694 18:40:57 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:34.694 18:40:57 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:34.694 18:40:57 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:34.694 18:40:57 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:34.694 18:40:57 nvmf_dif -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:34.694 18:40:57 nvmf_dif -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:34.694 Cannot find device "nvmf_tgt_br" 00:21:34.694 18:40:57 nvmf_dif -- nvmf/common.sh@155 -- # true 00:21:34.694 18:40:57 nvmf_dif -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:34.694 Cannot find device "nvmf_tgt_br2" 00:21:34.694 18:40:57 nvmf_dif -- nvmf/common.sh@156 -- # true 00:21:34.694 18:40:57 nvmf_dif -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:34.694 18:40:57 nvmf_dif -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:34.694 Cannot find device "nvmf_tgt_br" 00:21:34.694 18:40:57 nvmf_dif -- nvmf/common.sh@158 -- # true 00:21:34.694 18:40:57 nvmf_dif -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:34.694 Cannot find device "nvmf_tgt_br2" 00:21:34.694 18:40:57 nvmf_dif -- nvmf/common.sh@159 -- # true 00:21:34.694 18:40:57 nvmf_dif -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:34.694 18:40:57 nvmf_dif -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:34.694 18:40:57 nvmf_dif -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:34.694 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:34.694 18:40:57 nvmf_dif -- nvmf/common.sh@162 -- # true 00:21:34.694 18:40:57 nvmf_dif -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:34.694 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:34.694 18:40:57 nvmf_dif -- nvmf/common.sh@163 -- # true 00:21:34.694 18:40:57 nvmf_dif -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:34.694 18:40:57 nvmf_dif -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:34.952 18:40:57 nvmf_dif -- nvmf/common.sh@170 -- # ip link 
add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:34.952 18:40:57 nvmf_dif -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:34.952 18:40:57 nvmf_dif -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:34.952 18:40:57 nvmf_dif -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:34.952 18:40:57 nvmf_dif -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:34.952 18:40:57 nvmf_dif -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:34.952 18:40:57 nvmf_dif -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:34.952 18:40:57 nvmf_dif -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:34.952 18:40:57 nvmf_dif -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:34.952 18:40:57 nvmf_dif -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:34.952 18:40:57 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:34.952 18:40:57 nvmf_dif -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:34.952 18:40:57 nvmf_dif -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:34.952 18:40:57 nvmf_dif -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:34.952 18:40:57 nvmf_dif -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:34.952 18:40:57 nvmf_dif -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:34.952 18:40:57 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:34.952 18:40:57 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:34.952 18:40:57 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:34.952 18:40:57 nvmf_dif -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:34.952 18:40:57 nvmf_dif -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:34.952 18:40:57 nvmf_dif -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:34.952 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:34.952 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:21:34.952 00:21:34.952 --- 10.0.0.2 ping statistics --- 00:21:34.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:34.952 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:21:34.952 18:40:57 nvmf_dif -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:34.952 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:34.952 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.091 ms 00:21:34.952 00:21:34.952 --- 10.0.0.3 ping statistics --- 00:21:34.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:34.952 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:21:34.952 18:40:57 nvmf_dif -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:34.952 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:34.952 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:21:34.952 00:21:34.952 --- 10.0.0.1 ping statistics --- 00:21:34.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:34.952 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:21:34.952 18:40:57 nvmf_dif -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:34.952 18:40:57 nvmf_dif -- nvmf/common.sh@433 -- # return 0 00:21:34.952 18:40:57 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:21:34.952 18:40:57 nvmf_dif -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:35.518 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:35.518 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:35.518 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:35.518 18:40:58 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:35.518 18:40:58 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:35.518 18:40:58 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:35.518 18:40:58 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:35.518 18:40:58 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:35.518 18:40:58 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:35.518 18:40:58 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:21:35.518 18:40:58 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:21:35.518 18:40:58 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:35.518 18:40:58 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:35.518 18:40:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:35.777 18:40:58 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=97345 00:21:35.777 18:40:58 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 97345 00:21:35.777 18:40:58 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:35.777 18:40:58 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 97345 ']' 00:21:35.777 18:40:58 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:35.777 18:40:58 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:35.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:35.777 18:40:58 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:35.777 18:40:58 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:35.777 18:40:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:35.777 [2024-07-15 18:40:58.186852] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:21:35.777 [2024-07-15 18:40:58.186925] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:35.777 [2024-07-15 18:40:58.328856] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:36.035 [2024-07-15 18:40:58.411789] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:21:36.035 [2024-07-15 18:40:58.411841] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:36.035 [2024-07-15 18:40:58.411850] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:36.035 [2024-07-15 18:40:58.411858] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:36.035 [2024-07-15 18:40:58.411865] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:36.035 [2024-07-15 18:40:58.411897] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:36.648 18:40:59 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:36.648 18:40:59 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:21:36.648 18:40:59 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:36.648 18:40:59 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:36.648 18:40:59 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:36.648 18:40:59 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:36.648 18:40:59 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:21:36.648 18:40:59 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:21:36.648 18:40:59 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.648 18:40:59 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:36.648 [2024-07-15 18:40:59.115267] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:36.648 18:40:59 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.648 18:40:59 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:21:36.648 18:40:59 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:36.648 18:40:59 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:36.648 18:40:59 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:36.648 ************************************ 00:21:36.648 START TEST fio_dif_1_default 00:21:36.648 ************************************ 00:21:36.648 18:40:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:21:36.648 18:40:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:21:36.648 18:40:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:21:36.648 18:40:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:21:36.648 18:40:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:21:36.648 18:40:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:21:36.648 18:40:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:21:36.648 18:40:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.648 18:40:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:21:36.648 bdev_null0 00:21:36.648 18:40:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.648 18:40:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:36.648 18:40:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.648 18:40:59 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:21:36.648 18:40:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.648 18:40:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:36.648 18:40:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.648 18:40:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:21:36.648 18:40:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.648 18:40:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:36.648 18:40:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.648 18:40:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:21:36.648 [2024-07-15 18:40:59.179338] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:36.648 18:40:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.648 18:40:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:21:36.648 18:40:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:21:36.648 18:40:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:21:36.648 18:40:59 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:21:36.648 18:40:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:21:36.648 18:40:59 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:21:36.648 18:40:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:36.648 18:40:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:21:36.648 18:40:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:21:36.648 18:40:59 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:36.648 18:40:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:36.648 18:40:59 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:36.648 { 00:21:36.648 "params": { 00:21:36.648 "name": "Nvme$subsystem", 00:21:36.648 "trtype": "$TEST_TRANSPORT", 00:21:36.648 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:36.648 "adrfam": "ipv4", 00:21:36.648 "trsvcid": "$NVMF_PORT", 00:21:36.648 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:36.648 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:36.648 "hdgst": ${hdgst:-false}, 00:21:36.648 "ddgst": ${ddgst:-false} 00:21:36.648 }, 00:21:36.648 "method": "bdev_nvme_attach_controller" 00:21:36.648 } 00:21:36.648 EOF 00:21:36.648 )") 00:21:36.648 18:40:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:36.648 18:40:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:36.648 18:40:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:36.648 18:40:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:36.648 18:40:59 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:21:36.648 18:40:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:36.648 18:40:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:36.648 18:40:59 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:21:36.648 18:40:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:21:36.648 18:40:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:21:36.648 18:40:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:36.648 18:40:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:21:36.648 18:40:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:36.648 18:40:59 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:21:36.648 18:40:59 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:21:36.648 18:40:59 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:36.648 "params": { 00:21:36.648 "name": "Nvme0", 00:21:36.648 "trtype": "tcp", 00:21:36.648 "traddr": "10.0.0.2", 00:21:36.648 "adrfam": "ipv4", 00:21:36.648 "trsvcid": "4420", 00:21:36.648 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:36.648 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:36.648 "hdgst": false, 00:21:36.648 "ddgst": false 00:21:36.648 }, 00:21:36.648 "method": "bdev_nvme_attach_controller" 00:21:36.648 }' 00:21:36.648 18:40:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:36.648 18:40:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:36.648 18:40:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:36.648 18:40:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:36.648 18:40:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:21:36.648 18:40:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:36.906 18:40:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:36.906 18:40:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:36.906 18:40:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:36.906 18:40:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:36.907 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:21:36.907 fio-3.35 00:21:36.907 Starting 1 thread 00:21:49.107 00:21:49.107 filename0: (groupid=0, jobs=1): err= 0: pid=97431: Mon Jul 15 18:41:09 2024 00:21:49.107 read: IOPS=788, BW=3154KiB/s (3229kB/s)(30.9MiB/10020msec) 00:21:49.107 slat (nsec): min=5629, max=45356, avg=6193.34, stdev=1731.83 00:21:49.107 clat (usec): min=322, max=42041, avg=5055.73, stdev=12969.94 00:21:49.107 lat (usec): min=327, max=42047, avg=5061.92, stdev=12969.91 00:21:49.107 clat percentiles (usec): 00:21:49.107 | 1.00th=[ 326], 5.00th=[ 330], 10.00th=[ 334], 20.00th=[ 338], 00:21:49.107 | 30.00th=[ 343], 40.00th=[ 343], 50.00th=[ 347], 60.00th=[ 
351], 00:21:49.107 | 70.00th=[ 355], 80.00th=[ 359], 90.00th=[40633], 95.00th=[40633], 00:21:49.107 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[42206], 00:21:49.107 | 99.99th=[42206] 00:21:49.107 bw ( KiB/s): min= 2112, max= 4448, per=100.00%, avg=3157.60, stdev=526.33, samples=20 00:21:49.107 iops : min= 528, max= 1112, avg=789.40, stdev=131.58, samples=20 00:21:49.107 lat (usec) : 500=87.87%, 750=0.43% 00:21:49.107 lat (msec) : 2=0.05%, 50=11.65% 00:21:49.107 cpu : usr=84.19%, sys=15.30%, ctx=16, majf=0, minf=9 00:21:49.107 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:49.107 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:49.107 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:49.107 issued rwts: total=7900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:49.107 latency : target=0, window=0, percentile=100.00%, depth=4 00:21:49.107 00:21:49.107 Run status group 0 (all jobs): 00:21:49.107 READ: bw=3154KiB/s (3229kB/s), 3154KiB/s-3154KiB/s (3229kB/s-3229kB/s), io=30.9MiB (32.4MB), run=10020-10020msec 00:21:49.107 18:41:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:21:49.107 18:41:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:21:49.107 18:41:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:21:49.107 18:41:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:49.107 18:41:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:21:49.107 18:41:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:49.107 18:41:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.107 18:41:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:21:49.107 18:41:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.107 18:41:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:49.107 18:41:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.107 18:41:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:21:49.107 18:41:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.107 00:21:49.107 real 0m11.027s 00:21:49.107 user 0m9.061s 00:21:49.107 sys 0m1.837s 00:21:49.107 18:41:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:49.107 18:41:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:21:49.107 ************************************ 00:21:49.107 END TEST fio_dif_1_default 00:21:49.107 ************************************ 00:21:49.107 18:41:10 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:21:49.107 18:41:10 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:21:49.107 18:41:10 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:49.107 18:41:10 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:49.107 18:41:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:49.107 ************************************ 00:21:49.107 START TEST fio_dif_1_multi_subsystems 00:21:49.107 ************************************ 00:21:49.107 18:41:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # 
fio_dif_1_multi_subsystems 00:21:49.107 18:41:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:21:49.107 18:41:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:21:49.107 18:41:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:21:49.107 18:41:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:21:49.107 18:41:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:21:49.107 18:41:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:21:49.107 18:41:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:21:49.107 18:41:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.107 18:41:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:49.107 bdev_null0 00:21:49.107 18:41:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.107 18:41:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:49.107 18:41:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.107 18:41:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:49.107 18:41:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.107 18:41:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:49.107 18:41:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.107 18:41:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:49.107 18:41:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.107 18:41:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:49.107 18:41:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.107 18:41:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:49.107 [2024-07-15 18:41:10.259677] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:49.107 18:41:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.107 18:41:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:21:49.107 18:41:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:21:49.107 18:41:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:21:49.107 18:41:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:21:49.107 18:41:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.107 18:41:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:49.107 bdev_null1 00:21:49.107 18:41:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.107 18:41:10 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:21:49.107 18:41:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.107 18:41:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:49.107 18:41:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.107 18:41:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:21:49.107 18:41:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.107 18:41:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:49.107 18:41:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.107 18:41:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:49.107 18:41:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.107 18:41:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:49.107 18:41:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.107 18:41:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:21:49.107 18:41:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:21:49.107 18:41:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:21:49.107 18:41:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:21:49.107 18:41:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:21:49.107 18:41:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:49.107 18:41:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:49.107 { 00:21:49.108 "params": { 00:21:49.108 "name": "Nvme$subsystem", 00:21:49.108 "trtype": "$TEST_TRANSPORT", 00:21:49.108 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:49.108 "adrfam": "ipv4", 00:21:49.108 "trsvcid": "$NVMF_PORT", 00:21:49.108 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:49.108 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:49.108 "hdgst": ${hdgst:-false}, 00:21:49.108 "ddgst": ${ddgst:-false} 00:21:49.108 }, 00:21:49.108 "method": "bdev_nvme_attach_controller" 00:21:49.108 } 00:21:49.108 EOF 00:21:49.108 )") 00:21:49.108 18:41:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:49.108 18:41:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:49.108 18:41:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:21:49.108 18:41:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:49.108 18:41:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:21:49.108 18:41:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:21:49.108 18:41:10 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:21:49.108 18:41:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:49.108 18:41:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:49.108 18:41:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:49.108 18:41:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:21:49.108 18:41:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:49.108 18:41:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:49.108 18:41:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:49.108 18:41:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:49.108 18:41:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:49.108 { 00:21:49.108 "params": { 00:21:49.108 "name": "Nvme$subsystem", 00:21:49.108 "trtype": "$TEST_TRANSPORT", 00:21:49.108 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:49.108 "adrfam": "ipv4", 00:21:49.108 "trsvcid": "$NVMF_PORT", 00:21:49.108 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:49.108 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:49.108 "hdgst": ${hdgst:-false}, 00:21:49.108 "ddgst": ${ddgst:-false} 00:21:49.108 }, 00:21:49.108 "method": "bdev_nvme_attach_controller" 00:21:49.108 } 00:21:49.108 EOF 00:21:49.108 )") 00:21:49.108 18:41:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:21:49.108 18:41:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:21:49.108 18:41:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:21:49.108 18:41:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:49.108 18:41:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:21:49.108 18:41:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:21:49.108 18:41:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:21:49.108 18:41:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:21:49.108 18:41:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
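Note on the config generation traced above: gen_nvmf_target_json emits one bdev_nvme_attach_controller fragment per subsystem (the heredocs shown), joins them with commas, and hands the result to the fio bdev plugin over /dev/fd/62. A condensed standalone re-creation of that assembly pattern is sketched below; the per-controller fragments mirror the trace verbatim, while the outer "subsystems"/"bdev" envelope is an assumption here, since only the fragments and the jq/printf join are visible in this log.

# Rebuild the two-controller JSON config the way the trace does: one
# fragment per subsystem, comma-joined, pretty-printed with jq.
config=()
for sub in 0 1; do
  config+=("$(cat <<EOF
{
  "method": "bdev_nvme_attach_controller",
  "params": {
    "name": "Nvme$sub",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$sub",
    "hostnqn": "nqn.2016-06.io.spdk:host$sub",
    "hdgst": false,
    "ddgst": false
  }
}
EOF
)")
done
# The "subsystems"/"bdev" wrapper below is assumed, not shown in this trace.
(IFS=,; printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}\n' "${config[*]}") | jq .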
00:21:49.108 18:41:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:21:49.108 18:41:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:49.108 "params": { 00:21:49.108 "name": "Nvme0", 00:21:49.108 "trtype": "tcp", 00:21:49.108 "traddr": "10.0.0.2", 00:21:49.108 "adrfam": "ipv4", 00:21:49.108 "trsvcid": "4420", 00:21:49.108 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:49.108 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:49.108 "hdgst": false, 00:21:49.108 "ddgst": false 00:21:49.108 }, 00:21:49.108 "method": "bdev_nvme_attach_controller" 00:21:49.108 },{ 00:21:49.108 "params": { 00:21:49.108 "name": "Nvme1", 00:21:49.108 "trtype": "tcp", 00:21:49.108 "traddr": "10.0.0.2", 00:21:49.108 "adrfam": "ipv4", 00:21:49.108 "trsvcid": "4420", 00:21:49.108 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:49.108 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:49.108 "hdgst": false, 00:21:49.108 "ddgst": false 00:21:49.108 }, 00:21:49.108 "method": "bdev_nvme_attach_controller" 00:21:49.108 }' 00:21:49.108 18:41:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:49.108 18:41:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:49.108 18:41:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:49.108 18:41:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:49.108 18:41:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:21:49.108 18:41:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:49.108 18:41:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:49.108 18:41:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:49.108 18:41:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:49.108 18:41:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:49.108 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:21:49.108 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:21:49.108 fio-3.35 00:21:49.108 Starting 2 threads 00:21:59.078 00:21:59.078 filename0: (groupid=0, jobs=1): err= 0: pid=97595: Mon Jul 15 18:41:21 2024 00:21:59.078 read: IOPS=176, BW=705KiB/s (722kB/s)(7072KiB/10025msec) 00:21:59.078 slat (nsec): min=5801, max=52699, avg=8008.49, stdev=4371.04 00:21:59.078 clat (usec): min=328, max=41430, avg=22655.07, stdev=20159.87 00:21:59.078 lat (usec): min=334, max=41441, avg=22663.08, stdev=20159.66 00:21:59.078 clat percentiles (usec): 00:21:59.078 | 1.00th=[ 334], 5.00th=[ 343], 10.00th=[ 347], 20.00th=[ 355], 00:21:59.078 | 30.00th=[ 363], 40.00th=[ 392], 50.00th=[40633], 60.00th=[40633], 00:21:59.078 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:21:59.078 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:21:59.078 | 99.99th=[41681] 00:21:59.078 bw ( KiB/s): min= 576, max= 960, per=49.97%, avg=705.60, stdev=111.08, samples=20 00:21:59.078 iops : 
min= 144, max= 240, avg=176.40, stdev=27.77, samples=20 00:21:59.078 lat (usec) : 500=40.50%, 750=4.19%, 1000=0.34% 00:21:59.078 lat (msec) : 50=54.98% 00:21:59.078 cpu : usr=92.65%, sys=7.00%, ctx=9, majf=0, minf=0 00:21:59.078 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:59.078 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:59.078 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:59.078 issued rwts: total=1768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:59.078 latency : target=0, window=0, percentile=100.00%, depth=4 00:21:59.078 filename1: (groupid=0, jobs=1): err= 0: pid=97596: Mon Jul 15 18:41:21 2024 00:21:59.078 read: IOPS=176, BW=705KiB/s (722kB/s)(7072KiB/10025msec) 00:21:59.078 slat (nsec): min=5777, max=47041, avg=7937.47, stdev=4128.09 00:21:59.078 clat (usec): min=321, max=41487, avg=22654.75, stdev=20160.70 00:21:59.078 lat (usec): min=327, max=41493, avg=22662.69, stdev=20160.38 00:21:59.078 clat percentiles (usec): 00:21:59.078 | 1.00th=[ 330], 5.00th=[ 338], 10.00th=[ 343], 20.00th=[ 351], 00:21:59.078 | 30.00th=[ 359], 40.00th=[ 420], 50.00th=[40633], 60.00th=[40633], 00:21:59.078 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:21:59.078 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:21:59.078 | 99.99th=[41681] 00:21:59.078 bw ( KiB/s): min= 576, max= 897, per=49.97%, avg=705.65, stdev=108.22, samples=20 00:21:59.078 iops : min= 144, max= 224, avg=176.40, stdev=27.03, samples=20 00:21:59.078 lat (usec) : 500=40.50%, 750=4.07%, 1000=0.23% 00:21:59.078 lat (msec) : 2=0.23%, 50=54.98% 00:21:59.078 cpu : usr=92.68%, sys=6.91%, ctx=52, majf=0, minf=9 00:21:59.078 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:59.078 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:59.079 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:59.079 issued rwts: total=1768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:59.079 latency : target=0, window=0, percentile=100.00%, depth=4 00:21:59.079 00:21:59.079 Run status group 0 (all jobs): 00:21:59.079 READ: bw=1411KiB/s (1445kB/s), 705KiB/s-705KiB/s (722kB/s-722kB/s), io=13.8MiB (14.5MB), run=10025-10025msec 00:21:59.079 18:41:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:21:59.079 18:41:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:21:59.079 18:41:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:21:59.079 18:41:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:59.079 18:41:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:21:59.079 18:41:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:59.079 18:41:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.079 18:41:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:59.079 18:41:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.079 18:41:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:59.079 18:41:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.079 18:41:21 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:59.079 18:41:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.079 18:41:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:21:59.079 18:41:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:21:59.079 18:41:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:21:59.079 18:41:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:59.079 18:41:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.079 18:41:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:59.079 18:41:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.079 18:41:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:21:59.079 18:41:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.079 18:41:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:59.079 18:41:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.079 00:21:59.079 real 0m11.172s 00:21:59.079 user 0m19.345s 00:21:59.079 sys 0m1.697s 00:21:59.079 18:41:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:59.079 ************************************ 00:21:59.079 18:41:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:59.079 END TEST fio_dif_1_multi_subsystems 00:21:59.079 ************************************ 00:21:59.079 18:41:21 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:21:59.079 18:41:21 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:21:59.079 18:41:21 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:59.079 18:41:21 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:59.079 18:41:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:59.079 ************************************ 00:21:59.079 START TEST fio_dif_rand_params 00:21:59.079 ************************************ 00:21:59.079 18:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:21:59.079 18:41:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:21:59.079 18:41:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:21:59.079 18:41:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:21:59.079 18:41:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:21:59.079 18:41:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:21:59.079 18:41:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:21:59.079 18:41:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:21:59.079 18:41:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:21:59.079 18:41:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:21:59.079 18:41:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:21:59.079 18:41:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 
00:21:59.079 18:41:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:21:59.079 18:41:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:21:59.079 18:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.079 18:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:59.079 bdev_null0 00:21:59.079 18:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.079 18:41:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:59.079 18:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.079 18:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:59.079 18:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.079 18:41:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:59.079 18:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.079 18:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:59.079 18:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.079 18:41:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:59.079 18:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.079 18:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:59.079 [2024-07-15 18:41:21.502532] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:59.079 18:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.079 18:41:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:21:59.079 18:41:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:21:59.079 18:41:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:21:59.079 18:41:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:21:59.079 18:41:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:21:59.079 18:41:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:59.079 18:41:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:59.079 18:41:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:59.079 { 00:21:59.079 "params": { 00:21:59.079 "name": "Nvme$subsystem", 00:21:59.079 "trtype": "$TEST_TRANSPORT", 00:21:59.079 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:59.079 "adrfam": "ipv4", 00:21:59.079 "trsvcid": "$NVMF_PORT", 00:21:59.079 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:59.079 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:59.079 "hdgst": ${hdgst:-false}, 00:21:59.079 "ddgst": ${ddgst:-false} 00:21:59.079 }, 00:21:59.079 "method": "bdev_nvme_attach_controller" 00:21:59.079 } 00:21:59.079 EOF 00:21:59.079 )") 00:21:59.079 18:41:21 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:59.079 18:41:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:21:59.079 18:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:59.079 18:41:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:21:59.079 18:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:59.079 18:41:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:21:59.079 18:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:59.079 18:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:59.079 18:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:21:59.079 18:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:59.080 18:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:59.080 18:41:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:21:59.080 18:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:59.080 18:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:21:59.080 18:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:59.080 18:41:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:21:59.080 18:41:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:21:59.080 18:41:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
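For reference, the fio_bdev invocation traced just below preloads the SPDK bdev engine and passes the generated JSON config and the job file over /dev/fd/62 and /dev/fd/61. A minimal manual equivalent using regular files looks like the sketch below; the file names bdev.json and dif.fio are assumptions, the paths are the ones used in this job.

# Launch fio with the SPDK bdev ioengine against a saved JSON config and
# job file instead of process-substitution file descriptors.
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=bdev.json dif.fio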
00:21:59.080 18:41:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:21:59.080 18:41:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:59.080 "params": { 00:21:59.080 "name": "Nvme0", 00:21:59.080 "trtype": "tcp", 00:21:59.080 "traddr": "10.0.0.2", 00:21:59.080 "adrfam": "ipv4", 00:21:59.080 "trsvcid": "4420", 00:21:59.080 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:59.080 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:59.080 "hdgst": false, 00:21:59.080 "ddgst": false 00:21:59.080 }, 00:21:59.080 "method": "bdev_nvme_attach_controller" 00:21:59.080 }' 00:21:59.080 18:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:59.080 18:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:59.080 18:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:59.080 18:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:59.080 18:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:21:59.080 18:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:59.080 18:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:59.080 18:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:59.080 18:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:59.080 18:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:59.339 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:21:59.339 ... 
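The job that starts below uses the NULL_DIF=3 parameter set traced earlier (bs=128k, numjobs=3, iodepth=3, runtime=5, randread). A job file with that shape is sketched here; the bdev name Nvme0n1 is the conventional name of the first attached namespace and, like the exact global options, is an assumption rather than a verbatim copy of gen_fio_conf's output.

# Write a job file matching the parameters visible in the trace above.
cat > dif.fio <<'EOF'
[global]
thread=1
direct=1
rw=randread
ioengine=spdk_bdev
bs=128k
iodepth=3
runtime=5
time_based=1

[filename0]
filename=Nvme0n1
numjobs=3
EOF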
00:21:59.339 fio-3.35 00:21:59.339 Starting 3 threads 00:22:05.922 00:22:05.922 filename0: (groupid=0, jobs=1): err= 0: pid=97749: Mon Jul 15 18:41:27 2024 00:22:05.922 read: IOPS=317, BW=39.7MiB/s (41.6MB/s)(199MiB/5004msec) 00:22:05.922 slat (nsec): min=5841, max=27940, avg=9064.22, stdev=2851.87 00:22:05.922 clat (usec): min=4203, max=51413, avg=9435.55, stdev=4864.22 00:22:05.922 lat (usec): min=4216, max=51419, avg=9444.61, stdev=4864.21 00:22:05.922 clat percentiles (usec): 00:22:05.922 | 1.00th=[ 5080], 5.00th=[ 5866], 10.00th=[ 6259], 20.00th=[ 8455], 00:22:05.922 | 30.00th=[ 8848], 40.00th=[ 9110], 50.00th=[ 9372], 60.00th=[ 9503], 00:22:05.922 | 70.00th=[ 9634], 80.00th=[ 9765], 90.00th=[10028], 95.00th=[10159], 00:22:05.922 | 99.00th=[50070], 99.50th=[50594], 99.90th=[51119], 99.95th=[51643], 00:22:05.922 | 99.99th=[51643] 00:22:05.922 bw ( KiB/s): min=33792, max=45312, per=34.62%, avg=40988.44, stdev=3607.79, samples=9 00:22:05.922 iops : min= 264, max= 354, avg=320.22, stdev=28.19, samples=9 00:22:05.922 lat (msec) : 10=91.25%, 20=7.43%, 50=0.31%, 100=1.01% 00:22:05.922 cpu : usr=90.85%, sys=7.98%, ctx=88, majf=0, minf=0 00:22:05.922 IO depths : 1=6.7%, 2=93.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:05.922 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:05.922 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:05.922 issued rwts: total=1588,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:05.922 latency : target=0, window=0, percentile=100.00%, depth=3 00:22:05.922 filename0: (groupid=0, jobs=1): err= 0: pid=97750: Mon Jul 15 18:41:27 2024 00:22:05.922 read: IOPS=272, BW=34.1MiB/s (35.7MB/s)(171MiB/5005msec) 00:22:05.922 slat (nsec): min=5848, max=45786, avg=8632.73, stdev=4004.76 00:22:05.922 clat (usec): min=3302, max=13163, avg=10976.79, stdev=1879.48 00:22:05.922 lat (usec): min=3308, max=13169, avg=10985.42, stdev=1879.32 00:22:05.922 clat percentiles (usec): 00:22:05.922 | 1.00th=[ 6456], 5.00th=[ 7111], 10.00th=[ 7308], 20.00th=[10290], 00:22:05.922 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11600], 60.00th=[11863], 00:22:05.922 | 70.00th=[12125], 80.00th=[12256], 90.00th=[12518], 95.00th=[12780], 00:22:05.922 | 99.00th=[13042], 99.50th=[13173], 99.90th=[13173], 99.95th=[13173], 00:22:05.922 | 99.99th=[13173] 00:22:05.922 bw ( KiB/s): min=31488, max=39168, per=29.55%, avg=34986.67, stdev=2462.13, samples=9 00:22:05.922 iops : min= 246, max= 306, avg=273.33, stdev=19.24, samples=9 00:22:05.922 lat (msec) : 4=0.44%, 10=18.97%, 20=80.59% 00:22:05.922 cpu : usr=91.37%, sys=7.57%, ctx=7, majf=0, minf=9 00:22:05.922 IO depths : 1=32.2%, 2=67.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:05.922 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:05.922 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:05.922 issued rwts: total=1365,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:05.922 latency : target=0, window=0, percentile=100.00%, depth=3 00:22:05.922 filename0: (groupid=0, jobs=1): err= 0: pid=97751: Mon Jul 15 18:41:27 2024 00:22:05.922 read: IOPS=335, BW=41.9MiB/s (43.9MB/s)(210MiB/5004msec) 00:22:05.922 slat (nsec): min=4066, max=48705, avg=9619.35, stdev=3095.67 00:22:05.922 clat (usec): min=4616, max=52615, avg=8937.24, stdev=5309.53 00:22:05.922 lat (usec): min=4626, max=52641, avg=8946.86, stdev=5309.84 00:22:05.922 clat percentiles (usec): 00:22:05.922 | 1.00th=[ 5145], 5.00th=[ 5932], 10.00th=[ 7111], 20.00th=[ 7832], 00:22:05.922 | 
30.00th=[ 8094], 40.00th=[ 8356], 50.00th=[ 8455], 60.00th=[ 8586], 00:22:05.922 | 70.00th=[ 8717], 80.00th=[ 8979], 90.00th=[ 9241], 95.00th=[ 9503], 00:22:05.922 | 99.00th=[50070], 99.50th=[50594], 99.90th=[51643], 99.95th=[52691], 00:22:05.922 | 99.99th=[52691] 00:22:05.922 bw ( KiB/s): min=33792, max=47616, per=35.86%, avg=42467.56, stdev=4771.23, samples=9 00:22:05.922 iops : min= 264, max= 372, avg=331.78, stdev=37.28, samples=9 00:22:05.922 lat (msec) : 10=97.79%, 20=0.60%, 50=0.95%, 100=0.66% 00:22:05.922 cpu : usr=91.05%, sys=7.76%, ctx=13, majf=0, minf=0 00:22:05.922 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:05.922 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:05.922 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:05.922 issued rwts: total=1677,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:05.922 latency : target=0, window=0, percentile=100.00%, depth=3 00:22:05.922 00:22:05.922 Run status group 0 (all jobs): 00:22:05.922 READ: bw=116MiB/s (121MB/s), 34.1MiB/s-41.9MiB/s (35.7MB/s-43.9MB/s), io=579MiB (607MB), run=5004-5005msec 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 
64 512 --md-size 16 --dif-type 2 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:05.922 bdev_null0 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:05.922 [2024-07-15 18:41:27.513083] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:05.922 bdev_null1 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
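The per-subsystem setup traced through this test repeats a four-step pattern: create a DIF-capable null bdev, create an NVMe-oF subsystem, attach the bdev as a namespace, and expose a TCP listener. A condensed manual equivalent for the first subsystem is sketched below, assuming rpc_cmd is a thin wrapper around scripts/rpc.py talking to the already-running target (the TCP transport itself is created earlier in the job).

# One subsystem's worth of the setup traced above, as direct rpc.py calls:
# a 64 MB null bdev with 512-byte blocks, 16 bytes of metadata, DIF type 2,
# exported as cnode0 on 10.0.0.2:4420.
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420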
00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:05.922 bdev_null2 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:05.922 { 00:22:05.922 "params": { 00:22:05.922 "name": "Nvme$subsystem", 00:22:05.922 "trtype": "$TEST_TRANSPORT", 00:22:05.922 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:05.922 "adrfam": "ipv4", 00:22:05.922 "trsvcid": "$NVMF_PORT", 00:22:05.922 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:05.922 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:05.922 "hdgst": ${hdgst:-false}, 00:22:05.922 "ddgst": ${ddgst:-false} 00:22:05.922 }, 00:22:05.922 "method": "bdev_nvme_attach_controller" 00:22:05.922 } 00:22:05.922 EOF 00:22:05.922 )") 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:05.922 { 00:22:05.922 "params": { 00:22:05.922 "name": "Nvme$subsystem", 00:22:05.922 "trtype": "$TEST_TRANSPORT", 00:22:05.922 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:05.922 "adrfam": "ipv4", 00:22:05.922 "trsvcid": "$NVMF_PORT", 00:22:05.922 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:05.922 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:05.922 "hdgst": ${hdgst:-false}, 00:22:05.922 "ddgst": ${ddgst:-false} 00:22:05.922 }, 00:22:05.922 "method": "bdev_nvme_attach_controller" 00:22:05.922 } 00:22:05.922 EOF 00:22:05.922 )") 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:22:05.922 18:41:27 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:05.922 { 00:22:05.922 "params": { 00:22:05.922 "name": "Nvme$subsystem", 00:22:05.922 "trtype": "$TEST_TRANSPORT", 00:22:05.922 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:05.922 "adrfam": "ipv4", 00:22:05.922 "trsvcid": "$NVMF_PORT", 00:22:05.922 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:05.922 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:05.922 "hdgst": ${hdgst:-false}, 00:22:05.922 "ddgst": ${ddgst:-false} 00:22:05.922 }, 00:22:05.922 "method": "bdev_nvme_attach_controller" 00:22:05.922 } 00:22:05.922 EOF 00:22:05.922 )") 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:22:05.922 18:41:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:22:05.923 18:41:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:22:05.923 18:41:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:05.923 "params": { 00:22:05.923 "name": "Nvme0", 00:22:05.923 "trtype": "tcp", 00:22:05.923 "traddr": "10.0.0.2", 00:22:05.923 "adrfam": "ipv4", 00:22:05.923 "trsvcid": "4420", 00:22:05.923 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:05.923 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:05.923 "hdgst": false, 00:22:05.923 "ddgst": false 00:22:05.923 }, 00:22:05.923 "method": "bdev_nvme_attach_controller" 00:22:05.923 },{ 00:22:05.923 "params": { 00:22:05.923 "name": "Nvme1", 00:22:05.923 "trtype": "tcp", 00:22:05.923 "traddr": "10.0.0.2", 00:22:05.923 "adrfam": "ipv4", 00:22:05.923 "trsvcid": "4420", 00:22:05.923 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:05.923 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:05.923 "hdgst": false, 00:22:05.923 "ddgst": false 00:22:05.923 }, 00:22:05.923 "method": "bdev_nvme_attach_controller" 00:22:05.923 },{ 00:22:05.923 "params": { 00:22:05.923 "name": "Nvme2", 00:22:05.923 "trtype": "tcp", 00:22:05.923 "traddr": "10.0.0.2", 00:22:05.923 "adrfam": "ipv4", 00:22:05.923 "trsvcid": "4420", 00:22:05.923 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:05.923 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:05.923 "hdgst": false, 00:22:05.923 "ddgst": false 00:22:05.923 }, 00:22:05.923 "method": "bdev_nvme_attach_controller" 00:22:05.923 }' 00:22:05.923 18:41:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:05.923 18:41:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:05.923 18:41:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:05.923 18:41:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:05.923 18:41:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:22:05.923 18:41:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:05.923 18:41:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:05.923 18:41:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:05.923 
18:41:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:05.923 18:41:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:05.923 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:22:05.923 ... 00:22:05.923 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:22:05.923 ... 00:22:05.923 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:22:05.923 ... 00:22:05.923 fio-3.35 00:22:05.923 Starting 24 threads 00:22:18.187 00:22:18.187 filename0: (groupid=0, jobs=1): err= 0: pid=97848: Mon Jul 15 18:41:38 2024 00:22:18.187 read: IOPS=323, BW=1293KiB/s (1324kB/s)(12.7MiB/10031msec) 00:22:18.187 slat (usec): min=5, max=8039, avg=16.56, stdev=243.57 00:22:18.187 clat (msec): min=2, max=136, avg=49.32, stdev=19.20 00:22:18.187 lat (msec): min=2, max=136, avg=49.34, stdev=19.21 00:22:18.187 clat percentiles (msec): 00:22:18.187 | 1.00th=[ 3], 5.00th=[ 24], 10.00th=[ 31], 20.00th=[ 36], 00:22:18.187 | 30.00th=[ 39], 40.00th=[ 43], 50.00th=[ 48], 60.00th=[ 52], 00:22:18.187 | 70.00th=[ 58], 80.00th=[ 62], 90.00th=[ 74], 95.00th=[ 84], 00:22:18.187 | 99.00th=[ 106], 99.50th=[ 108], 99.90th=[ 138], 99.95th=[ 138], 00:22:18.187 | 99.99th=[ 138] 00:22:18.187 bw ( KiB/s): min= 768, max= 2290, per=4.98%, avg=1293.25, stdev=334.93, samples=20 00:22:18.187 iops : min= 192, max= 572, avg=323.25, stdev=83.64, samples=20 00:22:18.187 lat (msec) : 4=1.48%, 10=1.97%, 50=55.53%, 100=39.59%, 250=1.42% 00:22:18.187 cpu : usr=42.08%, sys=1.67%, ctx=911, majf=0, minf=9 00:22:18.187 IO depths : 1=0.8%, 2=1.7%, 4=8.1%, 8=76.5%, 16=12.9%, 32=0.0%, >=64=0.0% 00:22:18.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.187 complete : 0=0.0%, 4=89.5%, 8=6.2%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.187 issued rwts: total=3243,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.187 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:18.187 filename0: (groupid=0, jobs=1): err= 0: pid=97849: Mon Jul 15 18:41:38 2024 00:22:18.187 read: IOPS=305, BW=1224KiB/s (1253kB/s)(12.0MiB/10028msec) 00:22:18.187 slat (usec): min=4, max=3963, avg=11.40, stdev=98.53 00:22:18.187 clat (msec): min=13, max=114, avg=52.22, stdev=17.63 00:22:18.187 lat (msec): min=13, max=114, avg=52.23, stdev=17.63 00:22:18.187 clat percentiles (msec): 00:22:18.187 | 1.00th=[ 18], 5.00th=[ 31], 10.00th=[ 33], 20.00th=[ 37], 00:22:18.187 | 30.00th=[ 41], 40.00th=[ 46], 50.00th=[ 50], 60.00th=[ 55], 00:22:18.187 | 70.00th=[ 58], 80.00th=[ 67], 90.00th=[ 79], 95.00th=[ 86], 00:22:18.187 | 99.00th=[ 104], 99.50th=[ 108], 99.90th=[ 115], 99.95th=[ 115], 00:22:18.187 | 99.99th=[ 115] 00:22:18.187 bw ( KiB/s): min= 840, max= 1552, per=4.70%, avg=1220.35, stdev=213.78, samples=20 00:22:18.187 iops : min= 210, max= 388, avg=305.05, stdev=53.40, samples=20 00:22:18.187 lat (msec) : 20=1.04%, 50=50.39%, 100=47.52%, 250=1.04% 00:22:18.187 cpu : usr=41.30%, sys=1.96%, ctx=1320, majf=0, minf=9 00:22:18.187 IO depths : 1=0.8%, 2=2.1%, 4=8.8%, 8=75.6%, 16=12.6%, 32=0.0%, >=64=0.0% 00:22:18.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.187 complete : 0=0.0%, 4=89.8%, 8=5.6%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:22:18.187 issued rwts: total=3068,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.187 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:18.187 filename0: (groupid=0, jobs=1): err= 0: pid=97850: Mon Jul 15 18:41:38 2024 00:22:18.187 read: IOPS=244, BW=980KiB/s (1003kB/s)(9800KiB/10001msec) 00:22:18.187 slat (usec): min=2, max=8015, avg=17.70, stdev=242.44 00:22:18.187 clat (usec): min=292, max=133656, avg=65180.69, stdev=21901.31 00:22:18.187 lat (usec): min=298, max=133669, avg=65198.39, stdev=21908.23 00:22:18.187 clat percentiles (msec): 00:22:18.187 | 1.00th=[ 20], 5.00th=[ 36], 10.00th=[ 43], 20.00th=[ 48], 00:22:18.187 | 30.00th=[ 52], 40.00th=[ 57], 50.00th=[ 61], 60.00th=[ 70], 00:22:18.187 | 70.00th=[ 75], 80.00th=[ 83], 90.00th=[ 96], 95.00th=[ 107], 00:22:18.187 | 99.00th=[ 128], 99.50th=[ 132], 99.90th=[ 134], 99.95th=[ 134], 00:22:18.187 | 99.99th=[ 134] 00:22:18.187 bw ( KiB/s): min= 720, max= 1280, per=3.70%, avg=962.95, stdev=159.74, samples=19 00:22:18.187 iops : min= 180, max= 320, avg=240.74, stdev=39.93, samples=19 00:22:18.187 lat (usec) : 500=0.12% 00:22:18.187 lat (msec) : 4=0.65%, 10=0.08%, 20=0.57%, 50=24.33%, 100=66.61% 00:22:18.187 lat (msec) : 250=7.63% 00:22:18.187 cpu : usr=35.21%, sys=1.53%, ctx=1115, majf=0, minf=9 00:22:18.187 IO depths : 1=2.2%, 2=5.0%, 4=14.3%, 8=67.3%, 16=11.1%, 32=0.0%, >=64=0.0% 00:22:18.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.187 complete : 0=0.0%, 4=91.3%, 8=3.8%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.187 issued rwts: total=2450,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.187 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:18.187 filename0: (groupid=0, jobs=1): err= 0: pid=97851: Mon Jul 15 18:41:38 2024 00:22:18.187 read: IOPS=235, BW=943KiB/s (966kB/s)(9436KiB/10006msec) 00:22:18.187 slat (nsec): min=3113, max=29763, avg=10237.49, stdev=4327.26 00:22:18.187 clat (msec): min=19, max=156, avg=67.79, stdev=20.53 00:22:18.187 lat (msec): min=19, max=156, avg=67.80, stdev=20.53 00:22:18.187 clat percentiles (msec): 00:22:18.187 | 1.00th=[ 27], 5.00th=[ 38], 10.00th=[ 46], 20.00th=[ 51], 00:22:18.187 | 30.00th=[ 57], 40.00th=[ 61], 50.00th=[ 65], 60.00th=[ 71], 00:22:18.187 | 70.00th=[ 74], 80.00th=[ 86], 90.00th=[ 95], 95.00th=[ 105], 00:22:18.187 | 99.00th=[ 126], 99.50th=[ 138], 99.90th=[ 157], 99.95th=[ 157], 00:22:18.187 | 99.99th=[ 157] 00:22:18.187 bw ( KiB/s): min= 640, max= 1200, per=3.58%, avg=930.11, stdev=165.11, samples=19 00:22:18.187 iops : min= 160, max= 300, avg=232.53, stdev=41.28, samples=19 00:22:18.187 lat (msec) : 20=0.42%, 50=18.86%, 100=74.06%, 250=6.66% 00:22:18.187 cpu : usr=36.01%, sys=1.70%, ctx=994, majf=0, minf=9 00:22:18.187 IO depths : 1=1.9%, 2=4.2%, 4=12.9%, 8=69.6%, 16=11.4%, 32=0.0%, >=64=0.0% 00:22:18.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.188 complete : 0=0.0%, 4=90.8%, 8=4.4%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.188 issued rwts: total=2359,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.188 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:18.188 filename0: (groupid=0, jobs=1): err= 0: pid=97852: Mon Jul 15 18:41:38 2024 00:22:18.188 read: IOPS=268, BW=1074KiB/s (1099kB/s)(10.5MiB/10007msec) 00:22:18.188 slat (usec): min=4, max=4004, avg=13.00, stdev=112.40 00:22:18.188 clat (msec): min=24, max=140, avg=59.50, stdev=18.30 00:22:18.188 lat (msec): min=24, max=140, avg=59.52, stdev=18.30 00:22:18.188 clat percentiles (msec): 00:22:18.188 | 1.00th=[ 
31], 5.00th=[ 34], 10.00th=[ 38], 20.00th=[ 44], 00:22:18.188 | 30.00th=[ 48], 40.00th=[ 54], 50.00th=[ 57], 60.00th=[ 61], 00:22:18.188 | 70.00th=[ 69], 80.00th=[ 75], 90.00th=[ 84], 95.00th=[ 91], 00:22:18.188 | 99.00th=[ 115], 99.50th=[ 118], 99.90th=[ 142], 99.95th=[ 142], 00:22:18.188 | 99.99th=[ 142] 00:22:18.188 bw ( KiB/s): min= 728, max= 1392, per=4.10%, avg=1065.68, stdev=188.16, samples=19 00:22:18.188 iops : min= 182, max= 348, avg=266.42, stdev=47.04, samples=19 00:22:18.188 lat (msec) : 50=32.35%, 100=64.56%, 250=3.09% 00:22:18.188 cpu : usr=40.89%, sys=1.44%, ctx=1250, majf=0, minf=9 00:22:18.188 IO depths : 1=1.3%, 2=3.0%, 4=11.5%, 8=72.3%, 16=11.9%, 32=0.0%, >=64=0.0% 00:22:18.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.188 complete : 0=0.0%, 4=90.3%, 8=4.8%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.188 issued rwts: total=2686,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.188 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:18.188 filename0: (groupid=0, jobs=1): err= 0: pid=97853: Mon Jul 15 18:41:38 2024 00:22:18.188 read: IOPS=301, BW=1205KiB/s (1234kB/s)(11.8MiB/10025msec) 00:22:18.188 slat (usec): min=2, max=4026, avg=11.96, stdev=103.40 00:22:18.188 clat (msec): min=23, max=130, avg=53.02, stdev=19.90 00:22:18.188 lat (msec): min=23, max=130, avg=53.03, stdev=19.90 00:22:18.188 clat percentiles (msec): 00:22:18.188 | 1.00th=[ 26], 5.00th=[ 31], 10.00th=[ 33], 20.00th=[ 36], 00:22:18.188 | 30.00th=[ 40], 40.00th=[ 44], 50.00th=[ 48], 60.00th=[ 54], 00:22:18.188 | 70.00th=[ 60], 80.00th=[ 70], 90.00th=[ 84], 95.00th=[ 95], 00:22:18.188 | 99.00th=[ 109], 99.50th=[ 115], 99.90th=[ 117], 99.95th=[ 117], 00:22:18.188 | 99.99th=[ 131] 00:22:18.188 bw ( KiB/s): min= 640, max= 1600, per=4.63%, avg=1203.15, stdev=287.74, samples=20 00:22:18.188 iops : min= 160, max= 400, avg=300.75, stdev=71.91, samples=20 00:22:18.188 lat (msec) : 50=54.97%, 100=42.68%, 250=2.35% 00:22:18.188 cpu : usr=38.98%, sys=1.95%, ctx=1312, majf=0, minf=9 00:22:18.188 IO depths : 1=0.2%, 2=0.5%, 4=5.5%, 8=79.9%, 16=13.8%, 32=0.0%, >=64=0.0% 00:22:18.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.188 complete : 0=0.0%, 4=89.0%, 8=7.0%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.188 issued rwts: total=3020,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.188 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:18.188 filename0: (groupid=0, jobs=1): err= 0: pid=97855: Mon Jul 15 18:41:38 2024 00:22:18.188 read: IOPS=242, BW=972KiB/s (995kB/s)(9720KiB/10005msec) 00:22:18.188 slat (usec): min=4, max=9019, avg=16.18, stdev=244.72 00:22:18.188 clat (msec): min=25, max=143, avg=65.79, stdev=20.40 00:22:18.188 lat (msec): min=25, max=143, avg=65.81, stdev=20.40 00:22:18.188 clat percentiles (msec): 00:22:18.188 | 1.00th=[ 33], 5.00th=[ 36], 10.00th=[ 43], 20.00th=[ 48], 00:22:18.188 | 30.00th=[ 55], 40.00th=[ 60], 50.00th=[ 62], 60.00th=[ 70], 00:22:18.188 | 70.00th=[ 72], 80.00th=[ 82], 90.00th=[ 90], 95.00th=[ 107], 00:22:18.188 | 99.00th=[ 142], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 144], 00:22:18.188 | 99.99th=[ 144] 00:22:18.188 bw ( KiB/s): min= 640, max= 1280, per=3.70%, avg=960.42, stdev=180.44, samples=19 00:22:18.188 iops : min= 160, max= 320, avg=240.11, stdev=45.11, samples=19 00:22:18.188 lat (msec) : 50=23.87%, 100=70.49%, 250=5.64% 00:22:18.188 cpu : usr=31.03%, sys=1.52%, ctx=899, majf=0, minf=9 00:22:18.188 IO depths : 1=1.9%, 2=4.3%, 4=13.1%, 8=69.4%, 16=11.3%, 32=0.0%, >=64=0.0% 
00:22:18.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.188 complete : 0=0.0%, 4=91.0%, 8=4.1%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.188 issued rwts: total=2430,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.188 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:18.188 filename0: (groupid=0, jobs=1): err= 0: pid=97856: Mon Jul 15 18:41:38 2024 00:22:18.188 read: IOPS=314, BW=1258KiB/s (1288kB/s)(12.3MiB/10033msec) 00:22:18.188 slat (usec): min=4, max=4004, avg=10.44, stdev=71.24 00:22:18.188 clat (msec): min=6, max=114, avg=50.78, stdev=18.32 00:22:18.188 lat (msec): min=7, max=115, avg=50.79, stdev=18.32 00:22:18.188 clat percentiles (msec): 00:22:18.188 | 1.00th=[ 11], 5.00th=[ 27], 10.00th=[ 32], 20.00th=[ 36], 00:22:18.188 | 30.00th=[ 40], 40.00th=[ 44], 50.00th=[ 48], 60.00th=[ 53], 00:22:18.188 | 70.00th=[ 58], 80.00th=[ 66], 90.00th=[ 77], 95.00th=[ 88], 00:22:18.188 | 99.00th=[ 104], 99.50th=[ 107], 99.90th=[ 115], 99.95th=[ 115], 00:22:18.188 | 99.99th=[ 115] 00:22:18.188 bw ( KiB/s): min= 816, max= 1584, per=4.83%, avg=1255.40, stdev=249.98, samples=20 00:22:18.188 iops : min= 204, max= 396, avg=313.80, stdev=62.46, samples=20 00:22:18.188 lat (msec) : 10=0.51%, 20=1.01%, 50=55.96%, 100=41.13%, 250=1.39% 00:22:18.188 cpu : usr=42.40%, sys=1.76%, ctx=1369, majf=0, minf=9 00:22:18.188 IO depths : 1=0.2%, 2=0.4%, 4=5.2%, 8=79.8%, 16=14.4%, 32=0.0%, >=64=0.0% 00:22:18.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.188 complete : 0=0.0%, 4=89.0%, 8=7.5%, 16=3.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.188 issued rwts: total=3156,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.188 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:18.188 filename1: (groupid=0, jobs=1): err= 0: pid=97857: Mon Jul 15 18:41:38 2024 00:22:18.188 read: IOPS=306, BW=1225KiB/s (1254kB/s)(12.0MiB/10033msec) 00:22:18.188 slat (usec): min=5, max=4016, avg=11.55, stdev=95.36 00:22:18.188 clat (msec): min=2, max=130, avg=52.13, stdev=19.24 00:22:18.188 lat (msec): min=2, max=130, avg=52.14, stdev=19.24 00:22:18.188 clat percentiles (msec): 00:22:18.188 | 1.00th=[ 4], 5.00th=[ 26], 10.00th=[ 33], 20.00th=[ 39], 00:22:18.188 | 30.00th=[ 42], 40.00th=[ 48], 50.00th=[ 53], 60.00th=[ 56], 00:22:18.188 | 70.00th=[ 59], 80.00th=[ 67], 90.00th=[ 74], 95.00th=[ 84], 00:22:18.188 | 99.00th=[ 108], 99.50th=[ 118], 99.90th=[ 131], 99.95th=[ 131], 00:22:18.188 | 99.99th=[ 131] 00:22:18.188 bw ( KiB/s): min= 896, max= 2304, per=4.70%, avg=1222.75, stdev=293.49, samples=20 00:22:18.188 iops : min= 224, max= 576, avg=305.60, stdev=73.40, samples=20 00:22:18.188 lat (msec) : 4=1.11%, 10=2.54%, 20=0.52%, 50=41.67%, 100=52.64% 00:22:18.188 lat (msec) : 250=1.53% 00:22:18.188 cpu : usr=43.67%, sys=1.92%, ctx=1273, majf=0, minf=9 00:22:18.188 IO depths : 1=2.0%, 2=4.4%, 4=13.1%, 8=69.2%, 16=11.3%, 32=0.0%, >=64=0.0% 00:22:18.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.188 complete : 0=0.0%, 4=90.8%, 8=4.3%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.188 issued rwts: total=3072,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.188 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:18.188 filename1: (groupid=0, jobs=1): err= 0: pid=97858: Mon Jul 15 18:41:38 2024 00:22:18.188 read: IOPS=294, BW=1178KiB/s (1207kB/s)(11.5MiB/10034msec) 00:22:18.188 slat (usec): min=4, max=4019, avg=11.77, stdev=104.37 00:22:18.188 clat (msec): min=5, max=137, avg=54.23, stdev=19.96 00:22:18.188 lat 
(msec): min=5, max=137, avg=54.25, stdev=19.96 00:22:18.188 clat percentiles (msec): 00:22:18.188 | 1.00th=[ 8], 5.00th=[ 30], 10.00th=[ 34], 20.00th=[ 38], 00:22:18.188 | 30.00th=[ 42], 40.00th=[ 48], 50.00th=[ 52], 60.00th=[ 57], 00:22:18.188 | 70.00th=[ 61], 80.00th=[ 71], 90.00th=[ 81], 95.00th=[ 92], 00:22:18.188 | 99.00th=[ 117], 99.50th=[ 117], 99.90th=[ 138], 99.95th=[ 138], 00:22:18.188 | 99.99th=[ 138] 00:22:18.188 bw ( KiB/s): min= 728, max= 1574, per=4.52%, avg=1175.10, stdev=223.68, samples=20 00:22:18.188 iops : min= 182, max= 393, avg=293.75, stdev=55.87, samples=20 00:22:18.188 lat (msec) : 10=1.08%, 20=1.08%, 50=45.67%, 100=49.70%, 250=2.47% 00:22:18.188 cpu : usr=40.73%, sys=1.68%, ctx=1197, majf=0, minf=9 00:22:18.188 IO depths : 1=1.7%, 2=3.4%, 4=10.6%, 8=72.5%, 16=11.9%, 32=0.0%, >=64=0.0% 00:22:18.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.188 complete : 0=0.0%, 4=90.1%, 8=5.4%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.188 issued rwts: total=2956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.188 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:18.188 filename1: (groupid=0, jobs=1): err= 0: pid=97859: Mon Jul 15 18:41:38 2024 00:22:18.188 read: IOPS=258, BW=1033KiB/s (1058kB/s)(10.1MiB/10003msec) 00:22:18.188 slat (usec): min=2, max=8015, avg=16.08, stdev=222.71 00:22:18.188 clat (msec): min=12, max=141, avg=61.86, stdev=20.64 00:22:18.188 lat (msec): min=12, max=141, avg=61.88, stdev=20.64 00:22:18.188 clat percentiles (msec): 00:22:18.188 | 1.00th=[ 26], 5.00th=[ 34], 10.00th=[ 36], 20.00th=[ 47], 00:22:18.188 | 30.00th=[ 48], 40.00th=[ 56], 50.00th=[ 60], 60.00th=[ 64], 00:22:18.188 | 70.00th=[ 72], 80.00th=[ 80], 90.00th=[ 88], 95.00th=[ 99], 00:22:18.188 | 99.00th=[ 122], 99.50th=[ 138], 99.90th=[ 142], 99.95th=[ 142], 00:22:18.188 | 99.99th=[ 142] 00:22:18.188 bw ( KiB/s): min= 768, max= 1376, per=3.93%, avg=1021.95, stdev=170.73, samples=19 00:22:18.188 iops : min= 192, max= 344, avg=255.47, stdev=42.70, samples=19 00:22:18.188 lat (msec) : 20=0.23%, 50=35.84%, 100=59.75%, 250=4.18% 00:22:18.188 cpu : usr=35.24%, sys=1.61%, ctx=950, majf=0, minf=9 00:22:18.188 IO depths : 1=1.4%, 2=3.3%, 4=10.9%, 8=72.3%, 16=12.2%, 32=0.0%, >=64=0.0% 00:22:18.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.188 complete : 0=0.0%, 4=90.3%, 8=5.1%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.188 issued rwts: total=2584,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.188 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:18.188 filename1: (groupid=0, jobs=1): err= 0: pid=97862: Mon Jul 15 18:41:38 2024 00:22:18.188 read: IOPS=239, BW=956KiB/s (979kB/s)(9576KiB/10013msec) 00:22:18.188 slat (nsec): min=5870, max=35113, avg=9418.40, stdev=4125.96 00:22:18.188 clat (msec): min=19, max=153, avg=66.84, stdev=20.70 00:22:18.188 lat (msec): min=19, max=153, avg=66.85, stdev=20.70 00:22:18.188 clat percentiles (msec): 00:22:18.188 | 1.00th=[ 32], 5.00th=[ 36], 10.00th=[ 47], 20.00th=[ 50], 00:22:18.188 | 30.00th=[ 56], 40.00th=[ 61], 50.00th=[ 64], 60.00th=[ 71], 00:22:18.188 | 70.00th=[ 73], 80.00th=[ 83], 90.00th=[ 94], 95.00th=[ 106], 00:22:18.188 | 99.00th=[ 142], 99.50th=[ 144], 99.90th=[ 155], 99.95th=[ 155], 00:22:18.188 | 99.99th=[ 155] 00:22:18.188 bw ( KiB/s): min= 680, max= 1384, per=3.66%, avg=951.15, stdev=188.05, samples=20 00:22:18.188 iops : min= 170, max= 346, avg=237.75, stdev=47.04, samples=20 00:22:18.188 lat (msec) : 20=0.25%, 50=21.14%, 100=72.35%, 250=6.27% 
00:22:18.188 cpu : usr=35.90%, sys=1.63%, ctx=842, majf=0, minf=9 00:22:18.188 IO depths : 1=2.4%, 2=5.2%, 4=14.9%, 8=66.8%, 16=10.6%, 32=0.0%, >=64=0.0% 00:22:18.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.188 complete : 0=0.0%, 4=90.9%, 8=4.0%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.188 issued rwts: total=2394,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.188 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:18.188 filename1: (groupid=0, jobs=1): err= 0: pid=97863: Mon Jul 15 18:41:38 2024 00:22:18.188 read: IOPS=277, BW=1110KiB/s (1136kB/s)(10.9MiB/10032msec) 00:22:18.188 slat (usec): min=5, max=8020, avg=16.41, stdev=218.20 00:22:18.188 clat (msec): min=13, max=138, avg=57.51, stdev=19.93 00:22:18.188 lat (msec): min=13, max=138, avg=57.52, stdev=19.94 00:22:18.188 clat percentiles (msec): 00:22:18.188 | 1.00th=[ 25], 5.00th=[ 33], 10.00th=[ 35], 20.00th=[ 38], 00:22:18.188 | 30.00th=[ 46], 40.00th=[ 50], 50.00th=[ 58], 60.00th=[ 61], 00:22:18.189 | 70.00th=[ 68], 80.00th=[ 72], 90.00th=[ 84], 95.00th=[ 95], 00:22:18.189 | 99.00th=[ 117], 99.50th=[ 121], 99.90th=[ 138], 99.95th=[ 138], 00:22:18.189 | 99.99th=[ 138] 00:22:18.189 bw ( KiB/s): min= 640, max= 1472, per=4.27%, avg=1108.75, stdev=219.24, samples=20 00:22:18.189 iops : min= 160, max= 368, avg=277.15, stdev=54.75, samples=20 00:22:18.189 lat (msec) : 20=0.57%, 50=41.14%, 100=54.26%, 250=4.02% 00:22:18.189 cpu : usr=35.35%, sys=1.65%, ctx=1026, majf=0, minf=9 00:22:18.189 IO depths : 1=0.6%, 2=1.3%, 4=7.1%, 8=78.3%, 16=12.8%, 32=0.0%, >=64=0.0% 00:22:18.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.189 complete : 0=0.0%, 4=89.3%, 8=6.0%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.189 issued rwts: total=2783,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.189 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:18.189 filename1: (groupid=0, jobs=1): err= 0: pid=97864: Mon Jul 15 18:41:38 2024 00:22:18.189 read: IOPS=242, BW=971KiB/s (994kB/s)(9720KiB/10011msec) 00:22:18.189 slat (usec): min=3, max=4054, avg=11.10, stdev=82.16 00:22:18.189 clat (msec): min=19, max=123, avg=65.81, stdev=18.11 00:22:18.189 lat (msec): min=19, max=123, avg=65.82, stdev=18.11 00:22:18.189 clat percentiles (msec): 00:22:18.189 | 1.00th=[ 31], 5.00th=[ 40], 10.00th=[ 47], 20.00th=[ 52], 00:22:18.189 | 30.00th=[ 55], 40.00th=[ 57], 50.00th=[ 62], 60.00th=[ 69], 00:22:18.189 | 70.00th=[ 77], 80.00th=[ 82], 90.00th=[ 90], 95.00th=[ 105], 00:22:18.189 | 99.00th=[ 112], 99.50th=[ 114], 99.90th=[ 124], 99.95th=[ 124], 00:22:18.189 | 99.99th=[ 124] 00:22:18.189 bw ( KiB/s): min= 688, max= 1216, per=3.68%, avg=955.79, stdev=157.14, samples=19 00:22:18.189 iops : min= 172, max= 304, avg=238.95, stdev=39.28, samples=19 00:22:18.189 lat (msec) : 20=0.41%, 50=17.98%, 100=75.47%, 250=6.13% 00:22:18.189 cpu : usr=42.91%, sys=2.17%, ctx=1438, majf=0, minf=9 00:22:18.189 IO depths : 1=3.5%, 2=7.5%, 4=17.7%, 8=61.6%, 16=9.6%, 32=0.0%, >=64=0.0% 00:22:18.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.189 complete : 0=0.0%, 4=92.2%, 8=2.7%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.189 issued rwts: total=2430,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.189 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:18.189 filename1: (groupid=0, jobs=1): err= 0: pid=97865: Mon Jul 15 18:41:38 2024 00:22:18.189 read: IOPS=264, BW=1060KiB/s (1085kB/s)(10.4MiB/10018msec) 00:22:18.189 slat (usec): min=2, max=8036, 
avg=15.93, stdev=220.08 00:22:18.189 clat (msec): min=21, max=120, avg=60.26, stdev=18.15 00:22:18.189 lat (msec): min=21, max=120, avg=60.28, stdev=18.15 00:22:18.189 clat percentiles (msec): 00:22:18.189 | 1.00th=[ 24], 5.00th=[ 35], 10.00th=[ 37], 20.00th=[ 48], 00:22:18.189 | 30.00th=[ 48], 40.00th=[ 54], 50.00th=[ 59], 60.00th=[ 61], 00:22:18.189 | 70.00th=[ 70], 80.00th=[ 75], 90.00th=[ 85], 95.00th=[ 95], 00:22:18.189 | 99.00th=[ 109], 99.50th=[ 115], 99.90th=[ 122], 99.95th=[ 122], 00:22:18.189 | 99.99th=[ 122] 00:22:18.189 bw ( KiB/s): min= 816, max= 1328, per=4.07%, avg=1058.80, stdev=140.96, samples=20 00:22:18.189 iops : min= 204, max= 332, avg=264.70, stdev=35.24, samples=20 00:22:18.189 lat (msec) : 50=36.44%, 100=60.74%, 250=2.83% 00:22:18.189 cpu : usr=33.83%, sys=1.48%, ctx=954, majf=0, minf=9 00:22:18.189 IO depths : 1=0.8%, 2=2.2%, 4=9.4%, 8=74.4%, 16=13.2%, 32=0.0%, >=64=0.0% 00:22:18.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.189 complete : 0=0.0%, 4=89.9%, 8=6.0%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.189 issued rwts: total=2654,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.189 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:18.189 filename1: (groupid=0, jobs=1): err= 0: pid=97866: Mon Jul 15 18:41:38 2024 00:22:18.189 read: IOPS=294, BW=1178KiB/s (1206kB/s)(11.5MiB/10014msec) 00:22:18.189 slat (usec): min=5, max=7017, avg=11.49, stdev=129.26 00:22:18.189 clat (msec): min=22, max=152, avg=54.24, stdev=19.89 00:22:18.189 lat (msec): min=22, max=152, avg=54.26, stdev=19.88 00:22:18.189 clat percentiles (msec): 00:22:18.189 | 1.00th=[ 27], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 37], 00:22:18.189 | 30.00th=[ 41], 40.00th=[ 46], 50.00th=[ 51], 60.00th=[ 56], 00:22:18.189 | 70.00th=[ 61], 80.00th=[ 70], 90.00th=[ 83], 95.00th=[ 90], 00:22:18.189 | 99.00th=[ 117], 99.50th=[ 125], 99.90th=[ 153], 99.95th=[ 153], 00:22:18.189 | 99.99th=[ 153] 00:22:18.189 bw ( KiB/s): min= 728, max= 1552, per=4.52%, avg=1175.20, stdev=254.76, samples=20 00:22:18.189 iops : min= 182, max= 388, avg=293.80, stdev=63.69, samples=20 00:22:18.189 lat (msec) : 50=50.75%, 100=45.93%, 250=3.32% 00:22:18.189 cpu : usr=37.21%, sys=1.82%, ctx=1393, majf=0, minf=9 00:22:18.189 IO depths : 1=0.8%, 2=2.2%, 4=10.1%, 8=74.5%, 16=12.4%, 32=0.0%, >=64=0.0% 00:22:18.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.189 complete : 0=0.0%, 4=90.0%, 8=5.2%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.189 issued rwts: total=2948,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.189 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:18.189 filename2: (groupid=0, jobs=1): err= 0: pid=97867: Mon Jul 15 18:41:38 2024 00:22:18.189 read: IOPS=279, BW=1117KiB/s (1144kB/s)(10.9MiB/10017msec) 00:22:18.189 slat (usec): min=5, max=8020, avg=20.95, stdev=302.60 00:22:18.189 clat (msec): min=20, max=132, avg=57.16, stdev=19.47 00:22:18.189 lat (msec): min=20, max=132, avg=57.18, stdev=19.47 00:22:18.189 clat percentiles (msec): 00:22:18.189 | 1.00th=[ 24], 5.00th=[ 33], 10.00th=[ 35], 20.00th=[ 39], 00:22:18.189 | 30.00th=[ 47], 40.00th=[ 50], 50.00th=[ 56], 60.00th=[ 59], 00:22:18.189 | 70.00th=[ 64], 80.00th=[ 72], 90.00th=[ 84], 95.00th=[ 96], 00:22:18.189 | 99.00th=[ 109], 99.50th=[ 121], 99.90th=[ 133], 99.95th=[ 133], 00:22:18.189 | 99.99th=[ 133] 00:22:18.189 bw ( KiB/s): min= 736, max= 1472, per=4.30%, avg=1116.45, stdev=231.18, samples=20 00:22:18.189 iops : min= 184, max= 368, avg=279.10, stdev=57.79, 
samples=20 00:22:18.189 lat (msec) : 50=43.17%, 100=54.00%, 250=2.82% 00:22:18.189 cpu : usr=32.37%, sys=1.32%, ctx=903, majf=0, minf=9 00:22:18.189 IO depths : 1=0.4%, 2=1.1%, 4=6.8%, 8=78.4%, 16=13.3%, 32=0.0%, >=64=0.0% 00:22:18.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.189 complete : 0=0.0%, 4=89.5%, 8=6.1%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.189 issued rwts: total=2798,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.189 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:18.189 filename2: (groupid=0, jobs=1): err= 0: pid=97868: Mon Jul 15 18:41:38 2024 00:22:18.189 read: IOPS=257, BW=1029KiB/s (1054kB/s)(10.1MiB/10029msec) 00:22:18.189 slat (usec): min=5, max=8020, avg=15.66, stdev=209.52 00:22:18.189 clat (msec): min=18, max=138, avg=62.01, stdev=19.94 00:22:18.189 lat (msec): min=18, max=138, avg=62.02, stdev=19.95 00:22:18.189 clat percentiles (msec): 00:22:18.189 | 1.00th=[ 27], 5.00th=[ 35], 10.00th=[ 36], 20.00th=[ 47], 00:22:18.189 | 30.00th=[ 49], 40.00th=[ 57], 50.00th=[ 61], 60.00th=[ 64], 00:22:18.189 | 70.00th=[ 72], 80.00th=[ 81], 90.00th=[ 85], 95.00th=[ 91], 00:22:18.189 | 99.00th=[ 125], 99.50th=[ 130], 99.90th=[ 140], 99.95th=[ 140], 00:22:18.189 | 99.99th=[ 140] 00:22:18.189 bw ( KiB/s): min= 688, max= 1424, per=3.95%, avg=1025.35, stdev=193.87, samples=20 00:22:18.189 iops : min= 172, max= 356, avg=256.30, stdev=48.46, samples=20 00:22:18.189 lat (msec) : 20=0.62%, 50=34.50%, 100=60.81%, 250=4.07% 00:22:18.189 cpu : usr=32.26%, sys=1.24%, ctx=896, majf=0, minf=9 00:22:18.189 IO depths : 1=2.4%, 2=5.1%, 4=13.8%, 8=68.1%, 16=10.7%, 32=0.0%, >=64=0.0% 00:22:18.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.189 complete : 0=0.0%, 4=91.0%, 8=3.9%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.189 issued rwts: total=2580,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.189 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:18.189 filename2: (groupid=0, jobs=1): err= 0: pid=97869: Mon Jul 15 18:41:38 2024 00:22:18.189 read: IOPS=293, BW=1176KiB/s (1204kB/s)(11.5MiB/10031msec) 00:22:18.189 slat (usec): min=4, max=8015, avg=14.33, stdev=180.66 00:22:18.189 clat (msec): min=14, max=119, avg=54.33, stdev=17.75 00:22:18.189 lat (msec): min=14, max=119, avg=54.34, stdev=17.75 00:22:18.189 clat percentiles (msec): 00:22:18.189 | 1.00th=[ 23], 5.00th=[ 32], 10.00th=[ 34], 20.00th=[ 40], 00:22:18.189 | 30.00th=[ 45], 40.00th=[ 48], 50.00th=[ 54], 60.00th=[ 57], 00:22:18.189 | 70.00th=[ 61], 80.00th=[ 68], 90.00th=[ 79], 95.00th=[ 88], 00:22:18.189 | 99.00th=[ 109], 99.50th=[ 114], 99.90th=[ 120], 99.95th=[ 120], 00:22:18.189 | 99.99th=[ 120] 00:22:18.189 bw ( KiB/s): min= 768, max= 1552, per=4.52%, avg=1174.80, stdev=202.96, samples=20 00:22:18.189 iops : min= 192, max= 388, avg=293.70, stdev=50.74, samples=20 00:22:18.189 lat (msec) : 20=0.54%, 50=44.64%, 100=52.65%, 250=2.17% 00:22:18.189 cpu : usr=40.11%, sys=1.82%, ctx=1153, majf=0, minf=9 00:22:18.189 IO depths : 1=0.5%, 2=1.0%, 4=7.0%, 8=78.1%, 16=13.5%, 32=0.0%, >=64=0.0% 00:22:18.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.189 complete : 0=0.0%, 4=89.2%, 8=6.7%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.189 issued rwts: total=2948,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.189 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:18.189 filename2: (groupid=0, jobs=1): err= 0: pid=97870: Mon Jul 15 18:41:38 2024 00:22:18.189 read: IOPS=261, BW=1045KiB/s 
(1070kB/s)(10.2MiB/10012msec) 00:22:18.189 slat (usec): min=3, max=8017, avg=12.82, stdev=156.64 00:22:18.189 clat (msec): min=16, max=159, avg=61.19, stdev=20.72 00:22:18.189 lat (msec): min=16, max=159, avg=61.21, stdev=20.72 00:22:18.189 clat percentiles (msec): 00:22:18.189 | 1.00th=[ 28], 5.00th=[ 35], 10.00th=[ 36], 20.00th=[ 46], 00:22:18.189 | 30.00th=[ 48], 40.00th=[ 52], 50.00th=[ 59], 60.00th=[ 62], 00:22:18.189 | 70.00th=[ 71], 80.00th=[ 75], 90.00th=[ 89], 95.00th=[ 102], 00:22:18.189 | 99.00th=[ 124], 99.50th=[ 128], 99.90th=[ 161], 99.95th=[ 161], 00:22:18.189 | 99.99th=[ 161] 00:22:18.189 bw ( KiB/s): min= 640, max= 1328, per=4.00%, avg=1039.50, stdev=198.77, samples=20 00:22:18.189 iops : min= 160, max= 332, avg=259.85, stdev=49.70, samples=20 00:22:18.189 lat (msec) : 20=0.38%, 50=35.56%, 100=58.74%, 250=5.32% 00:22:18.189 cpu : usr=33.12%, sys=1.43%, ctx=943, majf=0, minf=9 00:22:18.189 IO depths : 1=1.1%, 2=2.8%, 4=11.0%, 8=72.4%, 16=12.6%, 32=0.0%, >=64=0.0% 00:22:18.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.189 complete : 0=0.0%, 4=90.4%, 8=5.1%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.189 issued rwts: total=2615,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.189 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:18.189 filename2: (groupid=0, jobs=1): err= 0: pid=97871: Mon Jul 15 18:41:38 2024 00:22:18.189 read: IOPS=236, BW=946KiB/s (968kB/s)(9460KiB/10004msec) 00:22:18.189 slat (nsec): min=2902, max=37058, avg=9333.62, stdev=4136.50 00:22:18.189 clat (msec): min=3, max=143, avg=67.60, stdev=18.04 00:22:18.189 lat (msec): min=3, max=143, avg=67.61, stdev=18.04 00:22:18.189 clat percentiles (msec): 00:22:18.189 | 1.00th=[ 25], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 54], 00:22:18.189 | 30.00th=[ 56], 40.00th=[ 61], 50.00th=[ 67], 60.00th=[ 72], 00:22:18.189 | 70.00th=[ 77], 80.00th=[ 82], 90.00th=[ 93], 95.00th=[ 97], 00:22:18.189 | 99.00th=[ 121], 99.50th=[ 124], 99.90th=[ 144], 99.95th=[ 144], 00:22:18.189 | 99.99th=[ 144] 00:22:18.189 bw ( KiB/s): min= 736, max= 1152, per=3.58%, avg=931.79, stdev=114.09, samples=19 00:22:18.189 iops : min= 184, max= 288, avg=232.95, stdev=28.52, samples=19 00:22:18.189 lat (msec) : 4=0.34%, 50=13.57%, 100=82.07%, 250=4.02% 00:22:18.189 cpu : usr=38.61%, sys=1.74%, ctx=1316, majf=0, minf=9 00:22:18.189 IO depths : 1=2.3%, 2=4.8%, 4=13.8%, 8=67.9%, 16=11.2%, 32=0.0%, >=64=0.0% 00:22:18.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.189 complete : 0=0.0%, 4=90.7%, 8=4.6%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.189 issued rwts: total=2365,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.189 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:18.189 filename2: (groupid=0, jobs=1): err= 0: pid=97872: Mon Jul 15 18:41:38 2024 00:22:18.190 read: IOPS=235, BW=943KiB/s (965kB/s)(9432KiB/10004msec) 00:22:18.190 slat (usec): min=2, max=12014, avg=21.21, stdev=346.51 00:22:18.190 clat (msec): min=6, max=130, avg=67.72, stdev=19.91 00:22:18.190 lat (msec): min=6, max=130, avg=67.74, stdev=19.91 00:22:18.190 clat percentiles (msec): 00:22:18.190 | 1.00th=[ 35], 5.00th=[ 40], 10.00th=[ 47], 20.00th=[ 50], 00:22:18.190 | 30.00th=[ 58], 40.00th=[ 61], 50.00th=[ 66], 60.00th=[ 72], 00:22:18.190 | 70.00th=[ 74], 80.00th=[ 83], 90.00th=[ 94], 95.00th=[ 108], 00:22:18.190 | 99.00th=[ 123], 99.50th=[ 126], 99.90th=[ 131], 99.95th=[ 131], 00:22:18.190 | 99.99th=[ 131] 00:22:18.190 bw ( KiB/s): min= 640, max= 1136, per=3.55%, avg=922.16, 
stdev=127.29, samples=19 00:22:18.190 iops : min= 160, max= 284, avg=230.53, stdev=31.84, samples=19 00:22:18.190 lat (msec) : 10=0.08%, 20=0.30%, 50=21.88%, 100=70.36%, 250=7.38% 00:22:18.190 cpu : usr=33.57%, sys=1.29%, ctx=980, majf=0, minf=9 00:22:18.190 IO depths : 1=2.8%, 2=7.0%, 4=19.0%, 8=61.2%, 16=10.0%, 32=0.0%, >=64=0.0% 00:22:18.190 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.190 complete : 0=0.0%, 4=92.4%, 8=2.1%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.190 issued rwts: total=2358,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.190 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:18.190 filename2: (groupid=0, jobs=1): err= 0: pid=97873: Mon Jul 15 18:41:38 2024 00:22:18.190 read: IOPS=241, BW=967KiB/s (990kB/s)(9684KiB/10012msec) 00:22:18.190 slat (usec): min=4, max=8032, avg=21.11, stdev=278.75 00:22:18.190 clat (msec): min=16, max=128, avg=66.07, stdev=18.89 00:22:18.190 lat (msec): min=16, max=128, avg=66.09, stdev=18.89 00:22:18.190 clat percentiles (msec): 00:22:18.190 | 1.00th=[ 33], 5.00th=[ 37], 10.00th=[ 47], 20.00th=[ 49], 00:22:18.190 | 30.00th=[ 57], 40.00th=[ 60], 50.00th=[ 62], 60.00th=[ 71], 00:22:18.190 | 70.00th=[ 73], 80.00th=[ 83], 90.00th=[ 91], 95.00th=[ 96], 00:22:18.190 | 99.00th=[ 118], 99.50th=[ 121], 99.90th=[ 129], 99.95th=[ 129], 00:22:18.190 | 99.99th=[ 129] 00:22:18.190 bw ( KiB/s): min= 678, max= 1328, per=3.70%, avg=961.95, stdev=173.62, samples=20 00:22:18.190 iops : min= 169, max= 332, avg=240.45, stdev=43.45, samples=20 00:22:18.190 lat (msec) : 20=0.41%, 50=22.14%, 100=73.11%, 250=4.34% 00:22:18.190 cpu : usr=34.69%, sys=1.48%, ctx=942, majf=0, minf=9 00:22:18.190 IO depths : 1=2.2%, 2=4.8%, 4=14.2%, 8=67.9%, 16=10.8%, 32=0.0%, >=64=0.0% 00:22:18.190 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.190 complete : 0=0.0%, 4=90.8%, 8=4.0%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.190 issued rwts: total=2421,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.190 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:18.190 filename2: (groupid=0, jobs=1): err= 0: pid=97874: Mon Jul 15 18:41:38 2024 00:22:18.190 read: IOPS=283, BW=1133KiB/s (1160kB/s)(11.1MiB/10013msec) 00:22:18.190 slat (usec): min=5, max=4028, avg=16.15, stdev=161.28 00:22:18.190 clat (msec): min=12, max=128, avg=56.39, stdev=18.83 00:22:18.190 lat (msec): min=12, max=128, avg=56.41, stdev=18.83 00:22:18.190 clat percentiles (msec): 00:22:18.190 | 1.00th=[ 28], 5.00th=[ 33], 10.00th=[ 35], 20.00th=[ 40], 00:22:18.190 | 30.00th=[ 45], 40.00th=[ 50], 50.00th=[ 54], 60.00th=[ 57], 00:22:18.190 | 70.00th=[ 64], 80.00th=[ 72], 90.00th=[ 84], 95.00th=[ 93], 00:22:18.190 | 99.00th=[ 106], 99.50th=[ 113], 99.90th=[ 129], 99.95th=[ 129], 00:22:18.190 | 99.99th=[ 129] 00:22:18.190 bw ( KiB/s): min= 792, max= 1632, per=4.35%, avg=1129.55, stdev=235.76, samples=20 00:22:18.190 iops : min= 198, max= 408, avg=282.35, stdev=58.98, samples=20 00:22:18.190 lat (msec) : 20=0.21%, 50=41.15%, 100=56.42%, 250=2.22% 00:22:18.190 cpu : usr=42.07%, sys=1.74%, ctx=1425, majf=0, minf=9 00:22:18.190 IO depths : 1=1.1%, 2=2.5%, 4=9.8%, 8=74.0%, 16=12.7%, 32=0.0%, >=64=0.0% 00:22:18.190 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.190 complete : 0=0.0%, 4=90.0%, 8=5.6%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.190 issued rwts: total=2836,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.190 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:18.190 00:22:18.190 
Run status group 0 (all jobs): 00:22:18.190 READ: bw=25.4MiB/s (26.6MB/s), 943KiB/s-1293KiB/s (965kB/s-1324kB/s), io=255MiB (267MB), run=10001-10034msec 00:22:18.190 18:41:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:22:18.190 18:41:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:22:18.190 18:41:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:22:18.190 18:41:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:18.190 18:41:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:22:18.190 18:41:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:18.190 18:41:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.190 18:41:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:18.190 18:41:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.190 18:41:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:18.190 18:41:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.190 18:41:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:18.190 18:41:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.190 18:41:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:22:18.190 18:41:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:22:18.190 18:41:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:22:18.190 18:41:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:18.190 18:41:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.190 18:41:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:18.190 18:41:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.190 18:41:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:22:18.190 18:41:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.190 18:41:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:18.190 18:41:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.190 18:41:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:22:18.190 18:41:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:22:18.190 18:41:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:22:18.190 18:41:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:18.190 18:41:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.190 18:41:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:18.190 18:41:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.190 18:41:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:22:18.190 18:41:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.190 18:41:38 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:18.190 18:41:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.190 18:41:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:22:18.190 18:41:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:22:18.190 18:41:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:22:18.190 18:41:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:22:18.190 18:41:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:22:18.190 18:41:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:22:18.190 18:41:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:22:18.190 18:41:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:22:18.190 18:41:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:22:18.190 18:41:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:22:18.190 18:41:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:22:18.190 18:41:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:22:18.190 18:41:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.190 18:41:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:18.190 bdev_null0 00:22:18.190 18:41:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.190 18:41:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:18.190 18:41:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.190 18:41:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:18.190 18:41:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.190 18:41:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:18.190 18:41:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.190 18:41:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:18.190 18:41:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.190 18:41:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:18.190 18:41:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.190 18:41:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:18.190 [2024-07-15 18:41:38.982778] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:18.190 18:41:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.190 18:41:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:22:18.190 18:41:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:22:18.190 18:41:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:22:18.190 18:41:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 
512 --md-size 16 --dif-type 1 00:22:18.190 18:41:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.190 18:41:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:18.190 bdev_null1 00:22:18.190 18:41:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.190 18:41:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:22:18.190 18:41:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.190 18:41:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:18.190 18:41:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.190 18:41:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:22:18.190 18:41:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.190 18:41:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:18.190 18:41:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.190 18:41:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:18.190 18:41:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.190 18:41:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:18.190 18:41:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.190 18:41:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:22:18.190 18:41:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:22:18.190 18:41:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:22:18.190 18:41:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:22:18.190 18:41:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:22:18.190 18:41:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:18.190 18:41:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:18.190 18:41:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:18.190 { 00:22:18.190 "params": { 00:22:18.191 "name": "Nvme$subsystem", 00:22:18.191 "trtype": "$TEST_TRANSPORT", 00:22:18.191 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:18.191 "adrfam": "ipv4", 00:22:18.191 "trsvcid": "$NVMF_PORT", 00:22:18.191 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:18.191 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:18.191 "hdgst": ${hdgst:-false}, 00:22:18.191 "ddgst": ${ddgst:-false} 00:22:18.191 }, 00:22:18.191 "method": "bdev_nvme_attach_controller" 00:22:18.191 } 00:22:18.191 EOF 00:22:18.191 )") 00:22:18.191 18:41:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:18.191 18:41:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:22:18.191 18:41:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 
00:22:18.191 18:41:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:22:18.191 18:41:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:18.191 18:41:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:22:18.191 18:41:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:18.191 18:41:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:18.191 18:41:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:22:18.191 18:41:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:18.191 18:41:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:22:18.191 18:41:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:18.191 18:41:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:18.191 18:41:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:22:18.191 18:41:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:22:18.191 18:41:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:22:18.191 18:41:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:22:18.191 18:41:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:18.191 18:41:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:18.191 18:41:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:18.191 { 00:22:18.191 "params": { 00:22:18.191 "name": "Nvme$subsystem", 00:22:18.191 "trtype": "$TEST_TRANSPORT", 00:22:18.191 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:18.191 "adrfam": "ipv4", 00:22:18.191 "trsvcid": "$NVMF_PORT", 00:22:18.191 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:18.191 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:18.191 "hdgst": ${hdgst:-false}, 00:22:18.191 "ddgst": ${ddgst:-false} 00:22:18.191 }, 00:22:18.191 "method": "bdev_nvme_attach_controller" 00:22:18.191 } 00:22:18.191 EOF 00:22:18.191 )") 00:22:18.191 18:41:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:22:18.191 18:41:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:22:18.191 18:41:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:22:18.191 18:41:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:22:18.191 18:41:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:22:18.191 18:41:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:18.191 "params": { 00:22:18.191 "name": "Nvme0", 00:22:18.191 "trtype": "tcp", 00:22:18.191 "traddr": "10.0.0.2", 00:22:18.191 "adrfam": "ipv4", 00:22:18.191 "trsvcid": "4420", 00:22:18.191 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:18.191 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:18.191 "hdgst": false, 00:22:18.191 "ddgst": false 00:22:18.191 }, 00:22:18.191 "method": "bdev_nvme_attach_controller" 00:22:18.191 },{ 00:22:18.191 "params": { 00:22:18.191 "name": "Nvme1", 00:22:18.191 "trtype": "tcp", 00:22:18.191 "traddr": "10.0.0.2", 00:22:18.191 "adrfam": "ipv4", 00:22:18.191 "trsvcid": "4420", 00:22:18.191 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:18.191 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:18.191 "hdgst": false, 00:22:18.191 "ddgst": false 00:22:18.191 }, 00:22:18.191 "method": "bdev_nvme_attach_controller" 00:22:18.191 }' 00:22:18.191 18:41:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:18.191 18:41:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:18.191 18:41:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:18.191 18:41:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:18.191 18:41:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:22:18.191 18:41:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:18.191 18:41:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:18.191 18:41:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:18.191 18:41:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:18.191 18:41:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:18.191 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:22:18.191 ... 00:22:18.191 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:22:18.191 ... 
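The trace above shows the whole data path being wired up before fio starts: dif.sh creates DIF-enabled null bdevs on the target, exports them through NVMe/TCP subsystems listening on 10.0.0.2:4420, generates a bdev_nvme_attach_controller JSON config for the initiator side, and launches fio with SPDK's external spdk_bdev engine. A minimal by-hand equivalent is sketched below; it assumes a running nvmf_tgt and SPDK's rpc.py (the harness goes through its rpc_cmd wrapper instead), and bdev.json / dif.fio are illustrative file names standing in for the JSON blob printed above and the job file that gen_fio_conf writes (rw=randread, bs=8k,16k,128k, iodepth=8, numjobs=2, runtime=5 in this pass).

# Target side: 64 MB null bdev, 512-byte blocks + 16-byte metadata, DIF type 1,
# attached as a namespace of an NVMe/TCP subsystem (arguments mirror the rpc_cmd calls traced above)
rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: preload the fio bdev plugin and hand it the generated JSON config;
# the harness passes both files through /dev/fd/62 and /dev/fd/61 instead of writing them to disk
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json dif.fio

The fio output that follows reports one result block per job file entry (filename0, filename1, ...), exactly as in the earlier 16-deep run above.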
00:22:18.191 fio-3.35 00:22:18.191 Starting 4 threads 00:22:22.377 00:22:22.377 filename0: (groupid=0, jobs=1): err= 0: pid=98006: Mon Jul 15 18:41:44 2024 00:22:22.377 read: IOPS=2547, BW=19.9MiB/s (20.9MB/s)(99.6MiB/5002msec) 00:22:22.377 slat (nsec): min=5822, max=42072, avg=7015.74, stdev=2158.72 00:22:22.377 clat (usec): min=2252, max=4307, avg=3106.00, stdev=96.91 00:22:22.377 lat (usec): min=2258, max=4331, avg=3113.02, stdev=97.05 00:22:22.377 clat percentiles (usec): 00:22:22.377 | 1.00th=[ 2769], 5.00th=[ 2999], 10.00th=[ 3032], 20.00th=[ 3064], 00:22:22.377 | 30.00th=[ 3064], 40.00th=[ 3097], 50.00th=[ 3097], 60.00th=[ 3097], 00:22:22.377 | 70.00th=[ 3130], 80.00th=[ 3163], 90.00th=[ 3195], 95.00th=[ 3228], 00:22:22.377 | 99.00th=[ 3425], 99.50th=[ 3458], 99.90th=[ 3785], 99.95th=[ 4293], 00:22:22.377 | 99.99th=[ 4293] 00:22:22.377 bw ( KiB/s): min=20224, max=20608, per=25.02%, avg=20399.11, stdev=121.88, samples=9 00:22:22.377 iops : min= 2528, max= 2576, avg=2549.89, stdev=15.24, samples=9 00:22:22.377 lat (msec) : 4=99.94%, 10=0.06% 00:22:22.377 cpu : usr=92.80%, sys=6.30%, ctx=6, majf=0, minf=0 00:22:22.377 IO depths : 1=7.7%, 2=25.0%, 4=50.0%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:22.377 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:22.377 complete : 0=0.0%, 4=89.3%, 8=10.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:22.377 issued rwts: total=12744,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:22.377 latency : target=0, window=0, percentile=100.00%, depth=8 00:22:22.377 filename0: (groupid=0, jobs=1): err= 0: pid=98007: Mon Jul 15 18:41:44 2024 00:22:22.377 read: IOPS=2545, BW=19.9MiB/s (20.9MB/s)(99.5MiB/5001msec) 00:22:22.377 slat (usec): min=5, max=926, avg=11.89, stdev= 9.05 00:22:22.377 clat (usec): min=465, max=6487, avg=3098.52, stdev=206.85 00:22:22.377 lat (usec): min=472, max=6507, avg=3110.41, stdev=207.69 00:22:22.377 clat percentiles (usec): 00:22:22.377 | 1.00th=[ 2409], 5.00th=[ 2737], 10.00th=[ 3032], 20.00th=[ 3032], 00:22:22.377 | 30.00th=[ 3064], 40.00th=[ 3097], 50.00th=[ 3097], 60.00th=[ 3097], 00:22:22.377 | 70.00th=[ 3130], 80.00th=[ 3130], 90.00th=[ 3195], 95.00th=[ 3490], 00:22:22.377 | 99.00th=[ 3720], 99.50th=[ 3785], 99.90th=[ 5342], 99.95th=[ 5538], 00:22:22.377 | 99.99th=[ 6456] 00:22:22.377 bw ( KiB/s): min=20184, max=20480, per=24.99%, avg=20376.00, stdev=84.29, samples=9 00:22:22.377 iops : min= 2523, max= 2560, avg=2547.00, stdev=10.54, samples=9 00:22:22.377 lat (usec) : 500=0.01% 00:22:22.377 lat (msec) : 2=0.10%, 4=99.71%, 10=0.18% 00:22:22.377 cpu : usr=92.70%, sys=6.34%, ctx=124, majf=0, minf=9 00:22:22.377 IO depths : 1=3.7%, 2=10.5%, 4=64.5%, 8=21.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:22.377 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:22.377 complete : 0=0.0%, 4=89.7%, 8=10.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:22.377 issued rwts: total=12731,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:22.377 latency : target=0, window=0, percentile=100.00%, depth=8 00:22:22.377 filename1: (groupid=0, jobs=1): err= 0: pid=98008: Mon Jul 15 18:41:44 2024 00:22:22.377 read: IOPS=2550, BW=19.9MiB/s (20.9MB/s)(99.7MiB/5002msec) 00:22:22.377 slat (nsec): min=5916, max=50300, avg=10251.42, stdev=3578.17 00:22:22.377 clat (usec): min=952, max=4249, avg=3092.02, stdev=197.21 00:22:22.377 lat (usec): min=965, max=4259, avg=3102.27, stdev=197.09 00:22:22.377 clat percentiles (usec): 00:22:22.377 | 1.00th=[ 2343], 5.00th=[ 2999], 10.00th=[ 3032], 20.00th=[ 3064], 00:22:22.377 | 30.00th=[ 
3064], 40.00th=[ 3097], 50.00th=[ 3097], 60.00th=[ 3097], 00:22:22.377 | 70.00th=[ 3130], 80.00th=[ 3130], 90.00th=[ 3163], 95.00th=[ 3195], 00:22:22.377 | 99.00th=[ 3851], 99.50th=[ 3851], 99.90th=[ 3916], 99.95th=[ 4047], 00:22:22.377 | 99.99th=[ 4228] 00:22:22.377 bw ( KiB/s): min=20224, max=20608, per=25.05%, avg=20423.11, stdev=128.03, samples=9 00:22:22.377 iops : min= 2528, max= 2576, avg=2552.89, stdev=16.00, samples=9 00:22:22.377 lat (usec) : 1000=0.03% 00:22:22.377 lat (msec) : 2=0.20%, 4=99.70%, 10=0.06% 00:22:22.377 cpu : usr=93.18%, sys=5.88%, ctx=56, majf=0, minf=0 00:22:22.377 IO depths : 1=8.5%, 2=25.0%, 4=50.0%, 8=16.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:22.377 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:22.377 complete : 0=0.0%, 4=89.3%, 8=10.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:22.377 issued rwts: total=12760,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:22.377 latency : target=0, window=0, percentile=100.00%, depth=8 00:22:22.377 filename1: (groupid=0, jobs=1): err= 0: pid=98009: Mon Jul 15 18:41:44 2024 00:22:22.377 read: IOPS=2548, BW=19.9MiB/s (20.9MB/s)(99.6MiB/5001msec) 00:22:22.377 slat (nsec): min=5831, max=49211, avg=10806.53, stdev=4179.72 00:22:22.377 clat (usec): min=1231, max=5683, avg=3088.58, stdev=323.71 00:22:22.377 lat (usec): min=1240, max=5689, avg=3099.39, stdev=323.59 00:22:22.377 clat percentiles (usec): 00:22:22.377 | 1.00th=[ 1975], 5.00th=[ 2966], 10.00th=[ 3032], 20.00th=[ 3032], 00:22:22.377 | 30.00th=[ 3064], 40.00th=[ 3064], 50.00th=[ 3097], 60.00th=[ 3097], 00:22:22.377 | 70.00th=[ 3097], 80.00th=[ 3130], 90.00th=[ 3163], 95.00th=[ 3195], 00:22:22.377 | 99.00th=[ 4621], 99.50th=[ 5080], 99.90th=[ 5473], 99.95th=[ 5604], 00:22:22.377 | 99.99th=[ 5669] 00:22:22.377 bw ( KiB/s): min=20224, max=20496, per=25.01%, avg=20394.67, stdev=85.79, samples=9 00:22:22.377 iops : min= 2528, max= 2562, avg=2549.33, stdev=10.72, samples=9 00:22:22.377 lat (msec) : 2=1.01%, 4=96.91%, 10=2.08% 00:22:22.377 cpu : usr=92.58%, sys=6.52%, ctx=10, majf=0, minf=9 00:22:22.377 IO depths : 1=6.9%, 2=25.0%, 4=50.0%, 8=18.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:22.377 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:22.377 complete : 0=0.0%, 4=89.4%, 8=10.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:22.377 issued rwts: total=12744,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:22.377 latency : target=0, window=0, percentile=100.00%, depth=8 00:22:22.377 00:22:22.377 Run status group 0 (all jobs): 00:22:22.377 READ: bw=79.6MiB/s (83.5MB/s), 19.9MiB/s-19.9MiB/s (20.9MB/s-20.9MB/s), io=398MiB (418MB), run=5001-5002msec 00:22:22.636 18:41:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:22:22.636 18:41:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:22:22.636 18:41:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:22:22.636 18:41:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:22.636 18:41:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:22:22.636 18:41:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:22.636 18:41:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.636 18:41:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:22.636 18:41:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:22:22.636 18:41:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:22.636 18:41:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.636 18:41:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:22.636 18:41:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.636 18:41:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:22:22.636 18:41:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:22:22.636 18:41:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:22:22.636 18:41:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:22.636 18:41:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.636 18:41:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:22.636 18:41:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.636 18:41:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:22:22.636 18:41:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.636 18:41:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:22.636 18:41:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.636 00:22:22.636 real 0m23.703s 00:22:22.636 user 2m4.904s 00:22:22.636 sys 0m7.315s 00:22:22.636 18:41:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:22.636 18:41:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:22.636 ************************************ 00:22:22.636 END TEST fio_dif_rand_params 00:22:22.636 ************************************ 00:22:22.636 18:41:45 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:22:22.636 18:41:45 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:22:22.636 18:41:45 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:22.636 18:41:45 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:22.636 18:41:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:22.636 ************************************ 00:22:22.636 START TEST fio_dif_digest 00:22:22.636 ************************************ 00:22:22.636 18:41:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:22:22.636 18:41:45 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:22:22.636 18:41:45 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:22:22.636 18:41:45 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:22:22.636 18:41:45 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:22:22.636 18:41:45 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:22:22.636 18:41:45 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:22:22.636 18:41:45 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:22:22.636 18:41:45 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:22:22.636 18:41:45 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:22:22.636 18:41:45 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:22:22.636 18:41:45 nvmf_dif.fio_dif_digest 
-- target/dif.sh@130 -- # create_subsystems 0 00:22:22.636 18:41:45 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:22:22.636 18:41:45 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:22:22.636 18:41:45 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:22:22.636 18:41:45 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:22:22.636 18:41:45 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:22:22.636 18:41:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.636 18:41:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:22:22.636 bdev_null0 00:22:22.636 18:41:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.636 18:41:45 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:22.636 18:41:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.636 18:41:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:22:22.636 18:41:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.636 18:41:45 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:22.636 18:41:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.636 18:41:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:22:22.896 18:41:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.896 18:41:45 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:22.896 18:41:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.896 18:41:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:22:22.896 [2024-07-15 18:41:45.260512] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:22.896 18:41:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.896 18:41:45 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:22:22.896 18:41:45 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:22.896 18:41:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:22.896 18:41:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:22.896 18:41:45 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:22:22.896 18:41:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:22.896 18:41:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:22.896 18:41:45 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:22:22.896 18:41:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:22.896 18:41:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:22:22.896 18:41:45 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@1343 -- # local asan_lib= 00:22:22.896 18:41:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:22:22.896 18:41:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:22.896 18:41:45 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:22:22.896 18:41:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:22:22.896 18:41:45 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:22:22.896 18:41:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:22.896 18:41:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:22.896 { 00:22:22.896 "params": { 00:22:22.896 "name": "Nvme$subsystem", 00:22:22.896 "trtype": "$TEST_TRANSPORT", 00:22:22.896 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:22.896 "adrfam": "ipv4", 00:22:22.896 "trsvcid": "$NVMF_PORT", 00:22:22.896 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:22.896 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:22.896 "hdgst": ${hdgst:-false}, 00:22:22.896 "ddgst": ${ddgst:-false} 00:22:22.896 }, 00:22:22.896 "method": "bdev_nvme_attach_controller" 00:22:22.896 } 00:22:22.896 EOF 00:22:22.896 )") 00:22:22.896 18:41:45 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:22:22.896 18:41:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:22:22.896 18:41:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:22.896 18:41:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:22.896 18:41:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:22:22.896 18:41:45 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:22:22.896 18:41:45 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:22:22.896 18:41:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:22:22.896 18:41:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:22:22.896 18:41:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:22.896 "params": { 00:22:22.896 "name": "Nvme0", 00:22:22.896 "trtype": "tcp", 00:22:22.896 "traddr": "10.0.0.2", 00:22:22.896 "adrfam": "ipv4", 00:22:22.896 "trsvcid": "4420", 00:22:22.896 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:22.896 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:22.896 "hdgst": true, 00:22:22.896 "ddgst": true 00:22:22.896 }, 00:22:22.896 "method": "bdev_nvme_attach_controller" 00:22:22.896 }' 00:22:22.896 18:41:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:22.896 18:41:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:22.896 18:41:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:22.896 18:41:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:22.896 18:41:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:22.896 18:41:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:22:22.896 18:41:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:22.896 18:41:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:22.896 18:41:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:22.896 18:41:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:22.896 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:22:22.896 ... 
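Relative to the earlier passes, the fio_dif_digest setup differs in two points visible in the trace: the null bdev is created with --dif-type 3, and the attach parameters printed just above carry "hdgst": true and "ddgst": true, so the NVMe/TCP connection runs with header and data digests (CRC32C) enabled and the 128 KiB random reads below exercise that verification path. A rough stand-alone equivalent follows; rpc.py is assumed as before, digest.fio is an illustrative name for the generated job file (rw=randread, bs=128k, iodepth=3, numjobs=3, runtime=10), and the surrounding "subsystems"/"bdev"/"config" framing is the standard SPDK JSON-config layout, assumed here because the log only prints the inner attach-controller entry.

# Target: DIF type 3 null bdev behind the same NVMe/TCP listener
rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# Initiator: same attach parameters as the JSON printed above, with both digests switched on
# (the subsystems/bdev/config wrapper is an assumption; only the inner object appears in the log)
cat > digest.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4",
            "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0", "hdgst": true, "ddgst": true
          }
        }
      ]
    }
  ]
}
EOF
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf digest.json digest.fio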
00:22:22.896 fio-3.35 00:22:22.896 Starting 3 threads 00:22:35.093 00:22:35.093 filename0: (groupid=0, jobs=1): err= 0: pid=98115: Mon Jul 15 18:41:56 2024 00:22:35.094 read: IOPS=297, BW=37.2MiB/s (39.0MB/s)(372MiB/10005msec) 00:22:35.094 slat (nsec): min=6079, max=50441, avg=11463.48, stdev=3242.96 00:22:35.094 clat (usec): min=5781, max=52395, avg=10068.51, stdev=4092.73 00:22:35.094 lat (usec): min=5793, max=52415, avg=10079.98, stdev=4092.87 00:22:35.094 clat percentiles (usec): 00:22:35.094 | 1.00th=[ 8455], 5.00th=[ 8848], 10.00th=[ 8979], 20.00th=[ 9241], 00:22:35.094 | 30.00th=[ 9372], 40.00th=[ 9503], 50.00th=[ 9634], 60.00th=[ 9765], 00:22:35.094 | 70.00th=[ 9896], 80.00th=[10028], 90.00th=[10290], 95.00th=[10552], 00:22:35.094 | 99.00th=[49021], 99.50th=[50070], 99.90th=[51119], 99.95th=[51119], 00:22:35.094 | 99.99th=[52167] 00:22:35.094 bw ( KiB/s): min=33024, max=40192, per=36.85%, avg=38067.20, stdev=2166.03, samples=20 00:22:35.094 iops : min= 258, max= 314, avg=297.40, stdev=16.92, samples=20 00:22:35.094 lat (msec) : 10=75.31%, 20=23.68%, 50=0.30%, 100=0.71% 00:22:35.094 cpu : usr=91.59%, sys=7.27%, ctx=21, majf=0, minf=9 00:22:35.094 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:35.094 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:35.094 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:35.094 issued rwts: total=2977,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:35.094 latency : target=0, window=0, percentile=100.00%, depth=3 00:22:35.094 filename0: (groupid=0, jobs=1): err= 0: pid=98116: Mon Jul 15 18:41:56 2024 00:22:35.094 read: IOPS=291, BW=36.4MiB/s (38.2MB/s)(364MiB/10007msec) 00:22:35.094 slat (nsec): min=6031, max=37447, avg=10971.22, stdev=3111.75 00:22:35.094 clat (usec): min=5040, max=13084, avg=10283.95, stdev=1181.75 00:22:35.094 lat (usec): min=5049, max=13122, avg=10294.92, stdev=1181.77 00:22:35.094 clat percentiles (usec): 00:22:35.094 | 1.00th=[ 6063], 5.00th=[ 7046], 10.00th=[ 9241], 20.00th=[ 9765], 00:22:35.094 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10552], 60.00th=[10683], 00:22:35.094 | 70.00th=[10945], 80.00th=[11076], 90.00th=[11338], 95.00th=[11731], 00:22:35.094 | 99.00th=[12125], 99.50th=[12387], 99.90th=[13042], 99.95th=[13042], 00:22:35.094 | 99.99th=[13042] 00:22:35.094 bw ( KiB/s): min=35840, max=39936, per=36.08%, avg=37273.60, stdev=1025.35, samples=20 00:22:35.094 iops : min= 280, max= 312, avg=291.20, stdev= 8.01, samples=20 00:22:35.094 lat (msec) : 10=27.48%, 20=72.52% 00:22:35.094 cpu : usr=91.65%, sys=7.23%, ctx=213, majf=0, minf=0 00:22:35.094 IO depths : 1=1.1%, 2=98.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:35.094 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:35.094 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:35.094 issued rwts: total=2915,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:35.094 latency : target=0, window=0, percentile=100.00%, depth=3 00:22:35.094 filename0: (groupid=0, jobs=1): err= 0: pid=98117: Mon Jul 15 18:41:56 2024 00:22:35.094 read: IOPS=220, BW=27.6MiB/s (28.9MB/s)(277MiB/10046msec) 00:22:35.094 slat (nsec): min=6005, max=28597, avg=10244.46, stdev=3030.60 00:22:35.094 clat (usec): min=7446, max=56896, avg=13562.26, stdev=1792.50 00:22:35.094 lat (usec): min=7460, max=56907, avg=13572.50, stdev=1792.63 00:22:35.094 clat percentiles (usec): 00:22:35.094 | 1.00th=[ 8094], 5.00th=[ 9241], 10.00th=[12911], 20.00th=[13304], 00:22:35.094 | 
30.00th=[13566], 40.00th=[13698], 50.00th=[13829], 60.00th=[13960], 00:22:35.094 | 70.00th=[14091], 80.00th=[14353], 90.00th=[14484], 95.00th=[14746], 00:22:35.094 | 99.00th=[15008], 99.50th=[15270], 99.90th=[16188], 99.95th=[47973], 00:22:35.094 | 99.99th=[56886] 00:22:35.094 bw ( KiB/s): min=27136, max=30464, per=27.43%, avg=28339.20, stdev=925.26, samples=20 00:22:35.094 iops : min= 212, max= 238, avg=221.40, stdev= 7.23, samples=20 00:22:35.094 lat (msec) : 10=5.60%, 20=94.31%, 50=0.05%, 100=0.05% 00:22:35.094 cpu : usr=92.11%, sys=7.00%, ctx=8, majf=0, minf=9 00:22:35.094 IO depths : 1=14.4%, 2=85.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:35.094 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:35.094 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:35.094 issued rwts: total=2216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:35.094 latency : target=0, window=0, percentile=100.00%, depth=3 00:22:35.094 00:22:35.094 Run status group 0 (all jobs): 00:22:35.094 READ: bw=101MiB/s (106MB/s), 27.6MiB/s-37.2MiB/s (28.9MB/s-39.0MB/s), io=1014MiB (1063MB), run=10005-10046msec 00:22:35.094 18:41:56 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:22:35.094 18:41:56 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:22:35.094 18:41:56 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:22:35.094 18:41:56 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:35.094 18:41:56 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:22:35.094 18:41:56 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:35.094 18:41:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.094 18:41:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:22:35.094 18:41:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.094 18:41:56 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:35.094 18:41:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.094 18:41:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:22:35.094 18:41:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.094 00:22:35.094 real 0m10.989s 00:22:35.094 user 0m28.224s 00:22:35.094 sys 0m2.445s 00:22:35.094 18:41:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:35.094 ************************************ 00:22:35.094 END TEST fio_dif_digest 00:22:35.094 ************************************ 00:22:35.094 18:41:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:22:35.094 18:41:56 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:22:35.094 18:41:56 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:22:35.094 18:41:56 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:22:35.094 18:41:56 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:35.094 18:41:56 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:22:35.094 18:41:56 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:35.094 18:41:56 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:22:35.094 18:41:56 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:35.094 18:41:56 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:35.094 rmmod nvme_tcp 00:22:35.094 rmmod nvme_fabrics 
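Condensed, the fio_dif_digest teardown running through this part of the trace is only a handful of commands; the sketch below assumes rpc_cmd is the usual wrapper around scripts/rpc.py talking to the target's default RPC socket (neither is spelled out in this excerpt):

  # Tear down the DIF test subsystem and its backing null bdev.
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  scripts/rpc.py bdev_null_delete bdev_null0
  # nvmftestfini: unload the initiator-side transport modules
  # (modprobe -r nvme-tcp cascades to nvme_fabrics and nvme_keyring, hence the rmmod lines here),
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  # then stop the nvmf target process (pid 97345 in this run) and reset hugepages/devices.
  kill 97345 && wait 97345   # wait works because the harness started the target from the same shell
  scripts/setup.sh reset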
00:22:35.094 rmmod nvme_keyring 00:22:35.094 18:41:56 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:35.094 18:41:56 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:22:35.094 18:41:56 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:22:35.094 18:41:56 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 97345 ']' 00:22:35.094 18:41:56 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 97345 00:22:35.094 18:41:56 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 97345 ']' 00:22:35.094 18:41:56 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 97345 00:22:35.094 18:41:56 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:22:35.094 18:41:56 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:35.094 18:41:56 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 97345 00:22:35.094 18:41:56 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:35.094 killing process with pid 97345 00:22:35.094 18:41:56 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:35.094 18:41:56 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 97345' 00:22:35.094 18:41:56 nvmf_dif -- common/autotest_common.sh@967 -- # kill 97345 00:22:35.094 18:41:56 nvmf_dif -- common/autotest_common.sh@972 -- # wait 97345 00:22:35.094 18:41:56 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:22:35.094 18:41:56 nvmf_dif -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:22:35.094 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:35.094 Waiting for block devices as requested 00:22:35.094 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:22:35.094 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:22:35.094 18:41:57 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:35.094 18:41:57 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:35.094 18:41:57 nvmf_dif -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:35.094 18:41:57 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:35.094 18:41:57 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:35.094 18:41:57 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:22:35.094 18:41:57 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:35.094 18:41:57 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:35.094 00:22:35.094 real 1m0.416s 00:22:35.094 user 3m48.051s 00:22:35.094 sys 0m20.039s 00:22:35.094 18:41:57 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:35.094 18:41:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:35.094 ************************************ 00:22:35.094 END TEST nvmf_dif 00:22:35.094 ************************************ 00:22:35.094 18:41:57 -- common/autotest_common.sh@1142 -- # return 0 00:22:35.094 18:41:57 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:22:35.094 18:41:57 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:35.094 18:41:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:35.094 18:41:57 -- common/autotest_common.sh@10 -- # set +x 00:22:35.094 ************************************ 00:22:35.094 START TEST nvmf_abort_qd_sizes 00:22:35.094 ************************************ 00:22:35.094 18:41:57 nvmf_abort_qd_sizes -- 
common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:22:35.094 * Looking for test storage... 00:22:35.094 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:35.094 18:41:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:35.094 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:22:35.094 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:35.094 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:35.094 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:35.094 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:35.094 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:35.094 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:35.094 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:35.094 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:35.094 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:35.094 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:35.094 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:22:35.095 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=ee8aff67-4252-4979-91cf-1a72f40d57b6 00:22:35.095 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:35.095 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:35.095 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:35.095 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:35.095 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:35.095 18:41:57 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:35.095 18:41:57 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:35.095 18:41:57 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:35.095 18:41:57 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.095 18:41:57 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.095 18:41:57 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.095 18:41:57 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:22:35.095 18:41:57 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.095 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:22:35.095 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:35.095 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:35.095 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:35.095 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:35.095 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:35.095 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:35.095 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:35.095 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:35.095 18:41:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:22:35.095 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:35.095 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:35.095 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:35.095 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:35.095 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:35.095 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:35.095 18:41:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:22:35.095 18:41:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:35.095 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:22:35.095 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:22:35.095 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:22:35.095 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:22:35.095 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:22:35.095 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # nvmf_veth_init 00:22:35.095 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:35.095 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:35.095 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:35.095 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:35.095 18:41:57 
nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:35.095 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:35.095 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:35.095 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:35.095 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:35.095 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:35.095 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:35.095 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:35.095 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:35.095 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:35.095 Cannot find device "nvmf_tgt_br" 00:22:35.095 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # true 00:22:35.095 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:35.095 Cannot find device "nvmf_tgt_br2" 00:22:35.095 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # true 00:22:35.095 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:35.095 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:35.095 Cannot find device "nvmf_tgt_br" 00:22:35.095 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # true 00:22:35.095 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:35.095 Cannot find device "nvmf_tgt_br2" 00:22:35.095 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # true 00:22:35.095 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:35.353 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:35.353 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:35.353 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:35.353 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:22:35.353 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:35.353 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:35.353 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:22:35.353 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:35.353 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:35.353 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:35.353 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:35.353 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:35.353 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:35.353 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:35.353 18:41:57 
nvmf_abort_qd_sizes -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:35.353 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:35.353 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:35.353 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:35.353 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:35.353 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:35.353 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:35.353 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:35.353 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:35.353 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:35.353 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:35.353 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:35.610 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:35.610 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:35.610 18:41:58 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:35.610 18:41:58 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:35.610 18:41:58 nvmf_abort_qd_sizes -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:35.610 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:35.610 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.143 ms 00:22:35.610 00:22:35.610 --- 10.0.0.2 ping statistics --- 00:22:35.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:35.611 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:22:35.611 18:41:58 nvmf_abort_qd_sizes -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:35.611 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:35.611 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.137 ms 00:22:35.611 00:22:35.611 --- 10.0.0.3 ping statistics --- 00:22:35.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:35.611 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:22:35.611 18:41:58 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:35.611 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:35.611 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.064 ms 00:22:35.611 00:22:35.611 --- 10.0.0.1 ping statistics --- 00:22:35.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:35.611 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:22:35.611 18:41:58 nvmf_abort_qd_sizes -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:35.611 18:41:58 nvmf_abort_qd_sizes -- nvmf/common.sh@433 -- # return 0 00:22:35.611 18:41:58 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:22:35.611 18:41:58 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:36.546 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:36.546 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:22:36.546 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:22:36.546 18:41:59 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:36.546 18:41:59 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:36.546 18:41:59 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:36.547 18:41:59 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:36.547 18:41:59 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:36.547 18:41:59 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:36.547 18:41:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:22:36.547 18:41:59 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:36.547 18:41:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:36.547 18:41:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:22:36.547 18:41:59 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=98713 00:22:36.547 18:41:59 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:22:36.547 18:41:59 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 98713 00:22:36.547 18:41:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 98713 ']' 00:22:36.547 18:41:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:36.547 18:41:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:36.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:36.547 18:41:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:36.547 18:41:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:36.547 18:41:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:22:36.805 [2024-07-15 18:41:59.183051] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 
00:22:36.805 [2024-07-15 18:41:59.183127] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:36.805 [2024-07-15 18:41:59.323416] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:37.064 [2024-07-15 18:41:59.420861] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:37.064 [2024-07-15 18:41:59.420912] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:37.064 [2024-07-15 18:41:59.420922] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:37.064 [2024-07-15 18:41:59.420930] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:37.064 [2024-07-15 18:41:59.420936] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:37.064 [2024-07-15 18:41:59.421155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:37.064 [2024-07-15 18:41:59.421426] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:37.064 [2024-07-15 18:41:59.422101] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:37.064 [2024-07-15 18:41:59.422103] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:37.632 18:42:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:37.632 18:42:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:22:37.632 18:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:37.632 18:42:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:37.632 18:42:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:22:37.632 18:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:37.632 18:42:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:22:37.632 18:42:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:22:37.632 18:42:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:22:37.632 18:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:22:37.632 18:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:22:37.632 18:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n '' ]] 00:22:37.632 18:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:22:37.632 18:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:22:37.632 18:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@295 -- # local bdf= 00:22:37.632 18:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:22:37.632 18:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@230 -- # local class 00:22:37.632 18:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@231 -- # local subclass 00:22:37.632 18:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@232 -- # local progif 00:22:37.632 18:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # printf %02x 1 00:22:37.632 18:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # 
class=01 00:22:37.632 18:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # printf %02x 8 00:22:37.632 18:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # subclass=08 00:22:37.632 18:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # printf %02x 2 00:22:37.632 18:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # progif=02 00:22:37.632 18:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # hash lspci 00:22:37.632 18:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:22:37.632 18:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@239 -- # lspci -mm -n -D 00:22:37.632 18:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:22:37.632 18:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # grep -i -- -p02 00:22:37.632 18:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # tr -d '"' 00:22:37.632 18:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:22:37.632 18:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:22:37.632 18:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:22:37.632 18:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:22:37.632 18:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:22:37.632 18:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:22:37.632 18:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:22:37.632 18:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:22:37.632 18:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:22:37.632 18:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:22:37.632 18:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:22:37.632 18:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:22:37.632 18:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:22:37.632 18:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:22:37.632 18:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:22:37.632 18:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:22:37.632 18:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:22:37.632 18:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:22:37.632 18:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:22:37.632 18:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:22:37.632 18:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:22:37.632 18:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:22:37.632 18:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:22:37.632 18:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:22:37.632 18:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 2 )) 00:22:37.632 18:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:22:37.632 18:42:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 00:22:37.632 18:42:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:22:37.632 18:42:00 
nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:22:37.632 18:42:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:37.632 18:42:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:37.632 18:42:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:22:37.632 ************************************ 00:22:37.632 START TEST spdk_target_abort 00:22:37.632 ************************************ 00:22:37.632 18:42:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:22:37.632 18:42:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:22:37.632 18:42:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:22:37.632 18:42:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.632 18:42:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:22:37.632 spdk_targetn1 00:22:37.632 18:42:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.632 18:42:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:37.632 18:42:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.632 18:42:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:22:37.632 [2024-07-15 18:42:00.226736] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:37.632 18:42:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.632 18:42:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:22:37.632 18:42:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.632 18:42:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:22:37.632 18:42:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.632 18:42:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:22:37.632 18:42:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.632 18:42:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:22:37.928 18:42:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.928 18:42:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:22:37.928 18:42:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.928 18:42:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:22:37.928 [2024-07-15 18:42:00.254850] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:37.928 18:42:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.928 18:42:00 
nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:22:37.928 18:42:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:22:37.928 18:42:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:22:37.928 18:42:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:22:37.928 18:42:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:22:37.928 18:42:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:22:37.928 18:42:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:22:37.928 18:42:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:22:37.928 18:42:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:22:37.928 18:42:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:37.928 18:42:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:22:37.928 18:42:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:37.928 18:42:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:22:37.928 18:42:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:37.928 18:42:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:22:37.928 18:42:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:37.928 18:42:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:37.928 18:42:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:37.928 18:42:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:37.928 18:42:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:22:37.928 18:42:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:41.225 Initializing NVMe Controllers 00:22:41.225 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:22:41.225 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:22:41.225 Initialization complete. Launching workers. 
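Stepping back to the nvmf_veth_init sequence traced earlier: the virtual test network it builds is small enough to summarize. The sketch below uses the interface and namespace names shown in the trace; the second target interface (nvmf_tgt_if2 / 10.0.0.3) is set up the same way and is omitted for brevity:

  # Target side lives in its own network namespace; initiator side stays in the root namespace.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  # Address plan: 10.0.0.1 = initiator, 10.0.0.2 = target inside the namespace.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # A bridge joins the veth peer ends so the two sides can reach each other.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings that follow in the trace simply confirm this topology (initiator to both target addresses, and target namespace back to 10.0.0.1) before the nvmf target is started inside the namespace.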
00:22:41.225 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 14705, failed: 0 00:22:41.225 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1130, failed to submit 13575 00:22:41.225 success 766, unsuccess 364, failed 0 00:22:41.225 18:42:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:22:41.225 18:42:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:44.612 Initializing NVMe Controllers 00:22:44.612 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:22:44.612 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:22:44.612 Initialization complete. Launching workers. 00:22:44.612 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 5978, failed: 0 00:22:44.612 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1258, failed to submit 4720 00:22:44.612 success 242, unsuccess 1016, failed 0 00:22:44.612 18:42:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:22:44.612 18:42:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:48.002 Initializing NVMe Controllers 00:22:48.002 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:22:48.002 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:22:48.002 Initialization complete. Launching workers. 
00:22:48.002 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 35722, failed: 0 00:22:48.002 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2810, failed to submit 32912 00:22:48.002 success 536, unsuccess 2274, failed 0 00:22:48.002 18:42:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:22:48.002 18:42:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.002 18:42:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:22:48.002 18:42:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.002 18:42:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:22:48.002 18:42:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.002 18:42:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:22:48.264 18:42:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.264 18:42:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 98713 00:22:48.264 18:42:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 98713 ']' 00:22:48.264 18:42:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 98713 00:22:48.264 18:42:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:22:48.264 18:42:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:48.264 18:42:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 98713 00:22:48.264 killing process with pid 98713 00:22:48.264 18:42:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:48.264 18:42:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:48.264 18:42:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 98713' 00:22:48.264 18:42:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 98713 00:22:48.264 18:42:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 98713 00:22:48.523 ************************************ 00:22:48.523 END TEST spdk_target_abort 00:22:48.523 ************************************ 00:22:48.523 00:22:48.523 real 0m10.882s 00:22:48.523 user 0m43.293s 00:22:48.523 sys 0m2.262s 00:22:48.523 18:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:48.523 18:42:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:22:48.523 18:42:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:22:48.523 18:42:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:22:48.523 18:42:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:48.523 18:42:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:48.524 18:42:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:22:48.524 
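Condensing the spdk_target_abort phase that just finished: the target-side setup is a short RPC sequence, after which the abort example is pointed at the listener with increasing queue depths. This sketch reuses the parameters visible in the trace, with scripts/rpc.py standing in for the harness's rpc_cmd wrapper (an assumption, since the wrapper itself is not shown here):

  # Attach the local NVMe PCI device as a bdev inside the running SPDK target.
  scripts/rpc.py bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target
  # Export it over NVMe/TCP on the namespaced target address.
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420
  # Then drive it with the abort example at queue depths 4, 24 and 64.
  for qd in 4 24 64; do
    build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
  done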
************************************ 00:22:48.524 START TEST kernel_target_abort 00:22:48.524 ************************************ 00:22:48.524 18:42:11 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:22:48.524 18:42:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:22:48.524 18:42:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:22:48.524 18:42:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:48.524 18:42:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:48.524 18:42:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:48.524 18:42:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:48.524 18:42:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:48.524 18:42:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:48.524 18:42:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:48.524 18:42:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:48.524 18:42:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:48.524 18:42:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:22:48.524 18:42:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:22:48.524 18:42:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:22:48.524 18:42:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:48.524 18:42:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:48.524 18:42:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:22:48.524 18:42:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:22:48.524 18:42:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:22:48.524 18:42:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:22:48.783 18:42:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:22:48.783 18:42:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:22:49.041 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:49.300 Waiting for block devices as requested 00:22:49.300 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:22:49.300 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:22:49.559 18:42:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:22:49.559 18:42:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:22:49.559 18:42:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:22:49.559 18:42:11 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:22:49.559 18:42:11 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:22:49.559 18:42:11 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:22:49.559 18:42:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:22:49.559 18:42:11 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:22:49.559 18:42:11 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:22:49.559 No valid GPT data, bailing 00:22:49.559 18:42:11 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:22:49.559 18:42:11 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:22:49.559 18:42:11 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:22:49.559 18:42:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:22:49.559 18:42:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:22:49.559 18:42:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:22:49.559 18:42:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:22:49.559 18:42:11 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:22:49.559 18:42:11 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:22:49.559 18:42:11 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:22:49.559 18:42:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:22:49.559 18:42:11 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:22:49.559 18:42:11 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:22:49.559 No valid GPT data, bailing 00:22:49.559 18:42:12 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:22:49.559 18:42:12 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:22:49.559 18:42:12 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:22:49.559 18:42:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:22:49.559 18:42:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:22:49.559 18:42:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:22:49.559 18:42:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:22:49.559 18:42:12 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:22:49.559 18:42:12 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:22:49.559 18:42:12 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:22:49.559 18:42:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:22:49.559 18:42:12 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:22:49.559 18:42:12 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:22:49.559 No valid GPT data, bailing 00:22:49.559 18:42:12 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:22:49.559 18:42:12 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:22:49.559 18:42:12 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:22:49.559 18:42:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:22:49.559 18:42:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:22:49.559 18:42:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:22:49.559 18:42:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:22:49.559 18:42:12 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:22:49.559 18:42:12 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:22:49.559 18:42:12 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:22:49.559 18:42:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:22:49.559 18:42:12 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:22:49.559 18:42:12 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:22:49.818 No valid GPT data, bailing 00:22:49.818 18:42:12 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:22:49.818 18:42:12 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:22:49.818 18:42:12 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:22:49.818 18:42:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:22:49.818 18:42:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ 
-b /dev/nvme1n1 ]] 00:22:49.818 18:42:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:49.818 18:42:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:49.818 18:42:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:22:49.818 18:42:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:22:49.818 18:42:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:22:49.818 18:42:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:22:49.818 18:42:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:22:49.818 18:42:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:22:49.818 18:42:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:22:49.818 18:42:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:22:49.818 18:42:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:22:49.818 18:42:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:22:49.818 18:42:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 --hostid=ee8aff67-4252-4979-91cf-1a72f40d57b6 -a 10.0.0.1 -t tcp -s 4420 00:22:49.818 00:22:49.818 Discovery Log Number of Records 2, Generation counter 2 00:22:49.818 =====Discovery Log Entry 0====== 00:22:49.818 trtype: tcp 00:22:49.818 adrfam: ipv4 00:22:49.818 subtype: current discovery subsystem 00:22:49.818 treq: not specified, sq flow control disable supported 00:22:49.818 portid: 1 00:22:49.818 trsvcid: 4420 00:22:49.818 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:22:49.818 traddr: 10.0.0.1 00:22:49.818 eflags: none 00:22:49.818 sectype: none 00:22:49.818 =====Discovery Log Entry 1====== 00:22:49.818 trtype: tcp 00:22:49.818 adrfam: ipv4 00:22:49.818 subtype: nvme subsystem 00:22:49.818 treq: not specified, sq flow control disable supported 00:22:49.818 portid: 1 00:22:49.818 trsvcid: 4420 00:22:49.818 subnqn: nqn.2016-06.io.spdk:testnqn 00:22:49.818 traddr: 10.0.0.1 00:22:49.818 eflags: none 00:22:49.818 sectype: none 00:22:49.818 18:42:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:22:49.818 18:42:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:22:49.818 18:42:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:22:49.818 18:42:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:22:49.818 18:42:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:22:49.818 18:42:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:22:49.818 18:42:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:22:49.818 18:42:12 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:22:49.818 18:42:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:22:49.818 18:42:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:49.818 18:42:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:22:49.818 18:42:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:49.818 18:42:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:22:49.818 18:42:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:49.818 18:42:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:22:49.818 18:42:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:49.818 18:42:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:22:49.818 18:42:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:49.818 18:42:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:49.818 18:42:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:22:49.818 18:42:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:53.149 Initializing NVMe Controllers 00:22:53.149 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:22:53.149 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:22:53.149 Initialization complete. Launching workers. 00:22:53.149 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 39444, failed: 0 00:22:53.150 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 39444, failed to submit 0 00:22:53.150 success 0, unsuccess 39444, failed 0 00:22:53.150 18:42:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:22:53.150 18:42:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:56.490 Initializing NVMe Controllers 00:22:56.490 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:22:56.490 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:22:56.490 Initialization complete. Launching workers. 
00:22:56.490 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 90198, failed: 0 00:22:56.490 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 41645, failed to submit 48553 00:22:56.490 success 0, unsuccess 41645, failed 0 00:22:56.490 18:42:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:22:56.490 18:42:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:59.777 Initializing NVMe Controllers 00:22:59.777 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:22:59.777 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:22:59.777 Initialization complete. Launching workers. 00:22:59.777 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 103363, failed: 0 00:22:59.777 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25848, failed to submit 77515 00:22:59.777 success 0, unsuccess 25848, failed 0 00:22:59.777 18:42:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:22:59.777 18:42:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:22:59.777 18:42:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:22:59.777 18:42:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:59.777 18:42:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:59.777 18:42:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:22:59.777 18:42:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:59.777 18:42:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:22:59.777 18:42:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:22:59.777 18:42:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:00.344 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:02.874 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:23:02.874 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:23:03.133 00:23:03.133 real 0m14.449s 00:23:03.133 user 0m6.426s 00:23:03.133 sys 0m5.468s 00:23:03.133 18:42:25 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:03.133 18:42:25 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:03.133 ************************************ 00:23:03.133 END TEST kernel_target_abort 00:23:03.133 ************************************ 00:23:03.133 18:42:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:23:03.133 18:42:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:23:03.133 
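The clean_kernel_target steps above tear the configfs target down in reverse order and unload the nvmet modules before setup.sh rebinds the NVMe devices. Condensed, with paths taken from the log; the traced "echo 0" is assumed to land in the namespace's enable attribute, which the xtrace does not show:

subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
echo 0 > "$subsys/namespaces/1/enable"   # destination assumed: disable the namespace first
rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
rmdir "$subsys/namespaces/1"
rmdir /sys/kernel/config/nvmet/ports/1
rmdir "$subsys"
modprobe -r nvmet_tcp nvmet              # unload the kernel target modules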
18:42:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:23:03.133 18:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:03.133 18:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:23:03.133 18:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:03.133 18:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:23:03.133 18:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:03.133 18:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:03.133 rmmod nvme_tcp 00:23:03.133 rmmod nvme_fabrics 00:23:03.133 rmmod nvme_keyring 00:23:03.133 18:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:03.133 18:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:23:03.133 18:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:23:03.133 18:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 98713 ']' 00:23:03.133 18:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 98713 00:23:03.133 18:42:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 98713 ']' 00:23:03.133 18:42:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 98713 00:23:03.133 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (98713) - No such process 00:23:03.133 Process with pid 98713 is not found 00:23:03.133 18:42:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 98713 is not found' 00:23:03.133 18:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:23:03.133 18:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:23:03.698 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:03.698 Waiting for block devices as requested 00:23:03.956 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:23:03.956 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:23:03.956 18:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:03.956 18:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:03.956 18:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:03.956 18:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:03.956 18:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:03.956 18:42:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:03.956 18:42:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:03.956 18:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:03.956 00:23:03.956 real 0m29.121s 00:23:03.956 user 0m50.973s 00:23:03.956 sys 0m9.599s 00:23:03.956 18:42:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:03.956 18:42:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:23:03.956 ************************************ 00:23:03.956 END TEST nvmf_abort_qd_sizes 00:23:03.956 ************************************ 00:23:04.215 18:42:26 -- common/autotest_common.sh@1142 -- # return 0 00:23:04.215 18:42:26 -- spdk/autotest.sh@295 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:23:04.215 18:42:26 -- common/autotest_common.sh@1099 -- # '[' 2 
-le 1 ']' 00:23:04.215 18:42:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:04.215 18:42:26 -- common/autotest_common.sh@10 -- # set +x 00:23:04.215 ************************************ 00:23:04.215 START TEST keyring_file 00:23:04.215 ************************************ 00:23:04.215 18:42:26 keyring_file -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:23:04.215 * Looking for test storage... 00:23:04.215 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:23:04.215 18:42:26 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:23:04.215 18:42:26 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:04.215 18:42:26 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:23:04.215 18:42:26 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:04.215 18:42:26 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:04.215 18:42:26 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:04.215 18:42:26 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:04.215 18:42:26 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:04.215 18:42:26 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:04.215 18:42:26 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:04.215 18:42:26 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:04.215 18:42:26 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:04.215 18:42:26 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:04.215 18:42:26 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:23:04.215 18:42:26 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=ee8aff67-4252-4979-91cf-1a72f40d57b6 00:23:04.215 18:42:26 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:04.215 18:42:26 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:04.215 18:42:26 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:04.215 18:42:26 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:04.215 18:42:26 keyring_file -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:04.215 18:42:26 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:04.215 18:42:26 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:04.215 18:42:26 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:04.215 18:42:26 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.215 18:42:26 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.215 18:42:26 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.215 18:42:26 keyring_file -- paths/export.sh@5 -- # export PATH 00:23:04.215 18:42:26 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.215 18:42:26 keyring_file -- nvmf/common.sh@47 -- # : 0 00:23:04.215 18:42:26 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:04.215 18:42:26 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:04.215 18:42:26 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:04.215 18:42:26 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:04.215 18:42:26 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:04.215 18:42:26 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:04.215 18:42:26 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:04.215 18:42:26 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:04.215 18:42:26 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:23:04.215 18:42:26 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:23:04.215 18:42:26 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:23:04.215 18:42:26 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:23:04.215 18:42:26 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:23:04.215 18:42:26 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:23:04.215 18:42:26 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:23:04.215 18:42:26 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:23:04.215 18:42:26 keyring_file -- keyring/common.sh@17 -- # name=key0 00:23:04.215 18:42:26 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:23:04.215 18:42:26 keyring_file -- keyring/common.sh@17 -- # digest=0 00:23:04.215 18:42:26 keyring_file -- keyring/common.sh@18 -- # mktemp 00:23:04.215 18:42:26 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.OVqiZcmuQB 00:23:04.215 18:42:26 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:23:04.215 18:42:26 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 
00112233445566778899aabbccddeeff 0 00:23:04.215 18:42:26 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:23:04.215 18:42:26 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:04.215 18:42:26 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:23:04.215 18:42:26 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:23:04.215 18:42:26 keyring_file -- nvmf/common.sh@705 -- # python - 00:23:04.474 18:42:26 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.OVqiZcmuQB 00:23:04.474 18:42:26 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.OVqiZcmuQB 00:23:04.474 18:42:26 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.OVqiZcmuQB 00:23:04.474 18:42:26 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:23:04.474 18:42:26 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:23:04.474 18:42:26 keyring_file -- keyring/common.sh@17 -- # name=key1 00:23:04.474 18:42:26 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:23:04.474 18:42:26 keyring_file -- keyring/common.sh@17 -- # digest=0 00:23:04.474 18:42:26 keyring_file -- keyring/common.sh@18 -- # mktemp 00:23:04.474 18:42:26 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.SRf3AMuyvT 00:23:04.474 18:42:26 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:23:04.474 18:42:26 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:23:04.474 18:42:26 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:23:04.474 18:42:26 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:04.474 18:42:26 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:23:04.474 18:42:26 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:23:04.474 18:42:26 keyring_file -- nvmf/common.sh@705 -- # python - 00:23:04.474 18:42:26 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.SRf3AMuyvT 00:23:04.474 18:42:26 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.SRf3AMuyvT 00:23:04.474 18:42:26 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.SRf3AMuyvT 00:23:04.474 18:42:26 keyring_file -- keyring/file.sh@30 -- # tgtpid=99607 00:23:04.474 18:42:26 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:04.474 18:42:26 keyring_file -- keyring/file.sh@32 -- # waitforlisten 99607 00:23:04.474 18:42:26 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 99607 ']' 00:23:04.474 18:42:26 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:04.474 18:42:26 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:04.474 18:42:26 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:04.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:04.474 18:42:26 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:04.474 18:42:26 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:23:04.474 [2024-07-15 18:42:27.000291] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 
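prep_key above turns a raw hex key into an NVMe TLS PSK interchange string, writes it to a mktemp file and restricts the file to mode 0600 (that permission matters again later in this log). The python one-liner itself is hidden by the xtrace, so the snippet below is only a sketch, under the assumption that the interchange format is NVMeTLSkey-1:<digest>:base64(key bytes + CRC32):, with digest 0 rendered as "00"; treat the exact encoding as illustrative rather than authoritative.

key=00112233445566778899aabbccddeeff
path=$(mktemp)
python3 - "$key" <<'PY' > "$path"
import base64, struct, sys, zlib
raw = bytes.fromhex(sys.argv[1])
crc = struct.pack("<I", zlib.crc32(raw) & 0xffffffff)  # CRC32 of the key, appended per the assumed interchange format
print("NVMeTLSkey-1:00:" + base64.b64encode(raw + crc).decode() + ":")
PY
chmod 0600 "$path"   # keyring_file_add_key rejects key files with looser permissions (exercised later in this log)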
00:23:04.474 [2024-07-15 18:42:27.000368] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99607 ] 00:23:04.732 [2024-07-15 18:42:27.141650] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:04.732 [2024-07-15 18:42:27.224486] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:05.300 18:42:27 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:05.300 18:42:27 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:23:05.300 18:42:27 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:23:05.300 18:42:27 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.300 18:42:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:23:05.300 [2024-07-15 18:42:27.847138] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:05.300 null0 00:23:05.300 [2024-07-15 18:42:27.879056] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:05.300 [2024-07-15 18:42:27.879363] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:23:05.300 [2024-07-15 18:42:27.891039] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:05.300 18:42:27 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.300 18:42:27 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:23:05.300 18:42:27 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:23:05.300 18:42:27 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:23:05.300 18:42:27 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:05.300 18:42:27 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:05.300 18:42:27 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:05.300 18:42:27 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:05.300 18:42:27 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:23:05.300 18:42:27 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.300 18:42:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:23:05.300 [2024-07-15 18:42:27.907007] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:23:05.300 request: 00:23:05.300 2024/07/15 18:42:27 error on JSON-RPC call, method: nvmf_subsystem_add_listener, params: map[listen_address:map[traddr:127.0.0.1 trsvcid:4420 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode0 secure_channel:%!s(bool=false)], err: error received for nvmf_subsystem_add_listener method, err: Code=-32602 Msg=Invalid parameters 00:23:05.300 { 00:23:05.559 "method": "nvmf_subsystem_add_listener", 00:23:05.559 "params": { 00:23:05.559 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:23:05.559 "secure_channel": false, 00:23:05.559 "listen_address": { 00:23:05.559 "trtype": "tcp", 00:23:05.559 "traddr": "127.0.0.1", 00:23:05.559 "trsvcid": "4420" 00:23:05.559 } 00:23:05.559 } 00:23:05.559 } 00:23:05.559 Got JSON-RPC error 
response 00:23:05.559 GoRPCClient: error on JSON-RPC call 00:23:05.559 18:42:27 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:05.559 18:42:27 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:23:05.559 18:42:27 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:05.559 18:42:27 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:05.559 18:42:27 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:05.559 18:42:27 keyring_file -- keyring/file.sh@46 -- # bperfpid=99642 00:23:05.559 18:42:27 keyring_file -- keyring/file.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:23:05.559 18:42:27 keyring_file -- keyring/file.sh@48 -- # waitforlisten 99642 /var/tmp/bperf.sock 00:23:05.559 18:42:27 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 99642 ']' 00:23:05.559 18:42:27 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:05.559 18:42:27 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:05.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:05.559 18:42:27 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:05.559 18:42:27 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:05.559 18:42:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:23:05.559 [2024-07-15 18:42:27.984693] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 00:23:05.559 [2024-07-15 18:42:27.984791] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99642 ] 00:23:05.559 [2024-07-15 18:42:28.129443] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:05.818 [2024-07-15 18:42:28.213437] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:06.384 18:42:28 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:06.384 18:42:28 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:23:06.384 18:42:28 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.OVqiZcmuQB 00:23:06.384 18:42:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.OVqiZcmuQB 00:23:06.643 18:42:29 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.SRf3AMuyvT 00:23:06.643 18:42:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.SRf3AMuyvT 00:23:06.902 18:42:29 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:23:06.902 18:42:29 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:23:06.902 18:42:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:06.902 18:42:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:06.902 18:42:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:07.160 18:42:29 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.OVqiZcmuQB == 
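bperf_cmd above is simply rpc.py pointed at the bdevperf application's RPC socket (/var/tmp/bperf.sock) instead of the default spdk.sock, because bdevperf was started with -z and waits for configuration over RPC. The key registration seen in the trace boils down to:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock
"$rpc" -s "$sock" keyring_file_add_key key0 /tmp/tmp.OVqiZcmuQB   # register the first PSK file under the name key0
"$rpc" -s "$sock" keyring_file_add_key key1 /tmp/tmp.SRf3AMuyvT   # and the second under key1
"$rpc" -s "$sock" keyring_get_keys                                # list the keyring contents for the assertions that follow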
\/\t\m\p\/\t\m\p\.\O\V\q\i\Z\c\m\u\Q\B ]] 00:23:07.160 18:42:29 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:23:07.160 18:42:29 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:23:07.160 18:42:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:07.160 18:42:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:07.160 18:42:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:07.160 18:42:29 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.SRf3AMuyvT == \/\t\m\p\/\t\m\p\.\S\R\f\3\A\M\u\y\v\T ]] 00:23:07.160 18:42:29 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:23:07.160 18:42:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:23:07.160 18:42:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:07.419 18:42:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:07.419 18:42:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:07.419 18:42:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:07.419 18:42:29 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:23:07.419 18:42:29 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:23:07.419 18:42:29 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:23:07.419 18:42:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:07.419 18:42:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:07.419 18:42:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:07.419 18:42:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:07.677 18:42:30 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:23:07.677 18:42:30 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:07.677 18:42:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:07.936 [2024-07-15 18:42:30.369505] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:07.936 nvme0n1 00:23:07.936 18:42:30 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:23:07.936 18:42:30 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:23:07.936 18:42:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:07.936 18:42:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:07.936 18:42:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:07.936 18:42:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:08.194 18:42:30 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:23:08.194 18:42:30 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:23:08.194 18:42:30 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:23:08.194 18:42:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:08.194 18:42:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd 
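Every refcount assertion in this part of the log follows the same pattern: call keyring_get_keys, pick the entry with jq, and compare .refcnt before and after the key is handed to a controller. A condensed version of what is traced above, reusing the rpc/sock shorthands from the previous sketch (attaching nvme0 with --psk key0 bumps key0's refcnt from 1 to 2):

get_refcnt() { "$rpc" -s "$sock" keyring_get_keys | jq -r ".[] | select(.name == \"$1\") | .refcnt"; }
get_refcnt key0   # -> 1, only the keyring itself references the key
"$rpc" -s "$sock" bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
get_refcnt key0   # -> 2, the attached controller holds the second reference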
keyring_get_keys 00:23:08.194 18:42:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:08.194 18:42:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:08.479 18:42:30 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:23:08.479 18:42:30 keyring_file -- keyring/file.sh@62 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:08.479 Running I/O for 1 seconds... 00:23:09.416 00:23:09.416 Latency(us) 00:23:09.416 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:09.416 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:23:09.416 nvme0n1 : 1.00 15990.17 62.46 0.00 0.00 7987.37 3868.99 16107.64 00:23:09.416 =================================================================================================================== 00:23:09.416 Total : 15990.17 62.46 0.00 0.00 7987.37 3868.99 16107.64 00:23:09.416 0 00:23:09.416 18:42:31 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:23:09.416 18:42:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:23:09.674 18:42:32 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:23:09.674 18:42:32 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:23:09.674 18:42:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:09.674 18:42:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:09.674 18:42:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:09.674 18:42:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:09.933 18:42:32 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:23:09.933 18:42:32 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:23:09.933 18:42:32 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:23:09.933 18:42:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:09.933 18:42:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:09.933 18:42:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:09.933 18:42:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:10.191 18:42:32 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:23:10.191 18:42:32 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:23:10.191 18:42:32 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:23:10.191 18:42:32 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:23:10.191 18:42:32 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:23:10.191 18:42:32 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:10.191 18:42:32 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:23:10.191 18:42:32 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
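Because bdevperf was launched with -z, the actual I/O phase is started separately through its helper script, exactly as traced above; the workload itself (-q 128 -o 4k -w randrw -M 50 -t 1) was fixed on the bdevperf command line, and the helper only triggers it and waits for the latency summary shown in the log:

/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests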
00:23:10.191 18:42:32 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:23:10.191 18:42:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:23:10.449 [2024-07-15 18:42:32.804661] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:10.449 [2024-07-15 18:42:32.804917] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c79f30 (107): Transport endpoint is not connected 00:23:10.449 [2024-07-15 18:42:32.805905] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c79f30 (9): Bad file descriptor 00:23:10.449 [2024-07-15 18:42:32.806901] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:10.449 [2024-07-15 18:42:32.807124] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:23:10.449 [2024-07-15 18:42:32.807207] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:10.449 2024/07/15 18:42:32 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:23:10.449 request: 00:23:10.449 { 00:23:10.449 "method": "bdev_nvme_attach_controller", 00:23:10.449 "params": { 00:23:10.449 "name": "nvme0", 00:23:10.449 "trtype": "tcp", 00:23:10.449 "traddr": "127.0.0.1", 00:23:10.449 "adrfam": "ipv4", 00:23:10.449 "trsvcid": "4420", 00:23:10.449 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:10.449 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:10.449 "prchk_reftag": false, 00:23:10.449 "prchk_guard": false, 00:23:10.449 "hdgst": false, 00:23:10.449 "ddgst": false, 00:23:10.449 "psk": "key1" 00:23:10.449 } 00:23:10.449 } 00:23:10.449 Got JSON-RPC error response 00:23:10.449 GoRPCClient: error on JSON-RPC call 00:23:10.449 18:42:32 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:23:10.449 18:42:32 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:10.449 18:42:32 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:10.449 18:42:32 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:10.449 18:42:32 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:23:10.449 18:42:32 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:23:10.449 18:42:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:10.449 18:42:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:10.449 18:42:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:10.449 18:42:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:10.449 18:42:33 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:23:10.449 
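The NOT wrapper above asserts that the command fails: key1 is not the PSK the target subsystem was configured with, so the connection never comes up and bdev_nvme_attach_controller is expected to return an error (the "Transport endpoint is not connected" / bad file descriptor messages in the trace). A sketch of the same negative check, using the rpc/sock shorthands from the earlier sketches; the interpretation of the failure cause is inferred from the log, not stated by it:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py; sock=/var/tmp/bperf.sock
if "$rpc" -s "$sock" bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1; then
    echo "unexpected: attach with the wrong PSK succeeded" >&2
    exit 1
fi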
18:42:33 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:23:10.449 18:42:33 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:23:10.449 18:42:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:10.449 18:42:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:10.449 18:42:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:10.449 18:42:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:10.707 18:42:33 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:23:10.707 18:42:33 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:23:10.707 18:42:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:23:10.965 18:42:33 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:23:10.965 18:42:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:23:11.223 18:42:33 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:23:11.223 18:42:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:11.223 18:42:33 keyring_file -- keyring/file.sh@77 -- # jq length 00:23:11.481 18:42:33 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:23:11.481 18:42:33 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.OVqiZcmuQB 00:23:11.481 18:42:33 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.OVqiZcmuQB 00:23:11.481 18:42:33 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:23:11.481 18:42:33 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.OVqiZcmuQB 00:23:11.481 18:42:33 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:23:11.481 18:42:33 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:11.481 18:42:33 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:23:11.481 18:42:33 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:11.481 18:42:33 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.OVqiZcmuQB 00:23:11.481 18:42:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.OVqiZcmuQB 00:23:11.481 [2024-07-15 18:42:34.039383] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.OVqiZcmuQB': 0100660 00:23:11.481 [2024-07-15 18:42:34.039419] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:11.481 2024/07/15 18:42:34 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.OVqiZcmuQB], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:23:11.481 request: 00:23:11.481 { 00:23:11.481 "method": "keyring_file_add_key", 00:23:11.481 "params": { 00:23:11.481 "name": "key0", 00:23:11.481 "path": "/tmp/tmp.OVqiZcmuQB" 00:23:11.481 } 00:23:11.481 } 00:23:11.481 Got JSON-RPC error response 00:23:11.481 GoRPCClient: error on JSON-RPC call 00:23:11.481 18:42:34 keyring_file -- common/autotest_common.sh@651 -- # 
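The failure above is intentional: keyring_file refuses key files that are readable by group or others, which is why prep_key chmods every key to 0600. The rejected add above and the successful re-add just below reduce to:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py; sock=/var/tmp/bperf.sock
chmod 0660 /tmp/tmp.OVqiZcmuQB
"$rpc" -s "$sock" keyring_file_add_key key0 /tmp/tmp.OVqiZcmuQB   # rejected: "Invalid permissions for key file ... 0100660"
chmod 0600 /tmp/tmp.OVqiZcmuQB
"$rpc" -s "$sock" keyring_file_add_key key0 /tmp/tmp.OVqiZcmuQB   # accepted once the mode is back to 0600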
es=1 00:23:11.481 18:42:34 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:11.481 18:42:34 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:11.481 18:42:34 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:11.481 18:42:34 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.OVqiZcmuQB 00:23:11.481 18:42:34 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.OVqiZcmuQB 00:23:11.481 18:42:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.OVqiZcmuQB 00:23:11.739 18:42:34 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.OVqiZcmuQB 00:23:11.739 18:42:34 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:23:11.739 18:42:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:23:11.739 18:42:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:11.739 18:42:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:11.739 18:42:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:11.739 18:42:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:11.999 18:42:34 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:23:11.999 18:42:34 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:11.999 18:42:34 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:23:11.999 18:42:34 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:11.999 18:42:34 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:23:11.999 18:42:34 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:11.999 18:42:34 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:23:11.999 18:42:34 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:11.999 18:42:34 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:11.999 18:42:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:12.258 [2024-07-15 18:42:34.682467] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.OVqiZcmuQB': No such file or directory 00:23:12.258 [2024-07-15 18:42:34.682503] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:23:12.258 [2024-07-15 18:42:34.682526] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:23:12.258 [2024-07-15 18:42:34.682535] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:12.258 [2024-07-15 18:42:34.682544] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:23:12.258 2024/07/15 
18:42:34 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-19 Msg=No such device 00:23:12.258 request: 00:23:12.258 { 00:23:12.258 "method": "bdev_nvme_attach_controller", 00:23:12.258 "params": { 00:23:12.258 "name": "nvme0", 00:23:12.258 "trtype": "tcp", 00:23:12.258 "traddr": "127.0.0.1", 00:23:12.258 "adrfam": "ipv4", 00:23:12.258 "trsvcid": "4420", 00:23:12.258 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:12.258 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:12.258 "prchk_reftag": false, 00:23:12.258 "prchk_guard": false, 00:23:12.258 "hdgst": false, 00:23:12.258 "ddgst": false, 00:23:12.258 "psk": "key0" 00:23:12.258 } 00:23:12.258 } 00:23:12.258 Got JSON-RPC error response 00:23:12.258 GoRPCClient: error on JSON-RPC call 00:23:12.258 18:42:34 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:23:12.258 18:42:34 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:12.258 18:42:34 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:12.258 18:42:34 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:12.258 18:42:34 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:23:12.258 18:42:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:23:12.517 18:42:34 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:23:12.517 18:42:34 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:23:12.517 18:42:34 keyring_file -- keyring/common.sh@17 -- # name=key0 00:23:12.517 18:42:34 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:23:12.517 18:42:34 keyring_file -- keyring/common.sh@17 -- # digest=0 00:23:12.517 18:42:34 keyring_file -- keyring/common.sh@18 -- # mktemp 00:23:12.517 18:42:34 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.5wh6fMFYts 00:23:12.517 18:42:34 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:23:12.517 18:42:34 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:23:12.517 18:42:34 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:23:12.517 18:42:34 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:12.517 18:42:34 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:23:12.517 18:42:34 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:23:12.517 18:42:34 keyring_file -- nvmf/common.sh@705 -- # python - 00:23:12.517 18:42:34 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.5wh6fMFYts 00:23:12.517 18:42:34 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.5wh6fMFYts 00:23:12.517 18:42:34 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.5wh6fMFYts 00:23:12.517 18:42:34 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.5wh6fMFYts 00:23:12.517 18:42:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.5wh6fMFYts 00:23:12.776 18:42:35 keyring_file -- 
keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:12.776 18:42:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:13.035 nvme0n1 00:23:13.035 18:42:35 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:23:13.035 18:42:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:13.035 18:42:35 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:23:13.035 18:42:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:13.035 18:42:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:13.035 18:42:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:13.293 18:42:35 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:23:13.293 18:42:35 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:23:13.293 18:42:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:23:13.551 18:42:35 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:23:13.551 18:42:35 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:23:13.551 18:42:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:13.551 18:42:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:13.551 18:42:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:13.551 18:42:36 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:23:13.551 18:42:36 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:23:13.551 18:42:36 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:23:13.551 18:42:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:13.551 18:42:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:13.551 18:42:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:13.551 18:42:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:13.808 18:42:36 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:23:13.808 18:42:36 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:23:13.808 18:42:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:23:14.065 18:42:36 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:23:14.065 18:42:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:14.065 18:42:36 keyring_file -- keyring/file.sh@104 -- # jq length 00:23:14.322 18:42:36 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:23:14.322 18:42:36 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.5wh6fMFYts 00:23:14.322 18:42:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.5wh6fMFYts 
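Removing a key that an attached controller still references does not delete it outright: keyring_get_keys keeps showing the entry with "removed": true and a non-zero refcnt, and the key only disappears once the controller is detached and the last reference drops. The sequence traced above, condensed (rpc/sock shorthands as in the earlier sketches):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py; sock=/var/tmp/bperf.sock
"$rpc" -s "$sock" keyring_file_remove_key key0
"$rpc" -s "$sock" keyring_get_keys | jq '.[] | select(.name == "key0") | .removed'   # -> true, but refcnt is still 1
"$rpc" -s "$sock" bdev_nvme_detach_controller nvme0
"$rpc" -s "$sock" keyring_get_keys | jq length                                       # -> 0 once the last reference is gone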
00:23:14.579 18:42:36 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.SRf3AMuyvT 00:23:14.579 18:42:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.SRf3AMuyvT 00:23:14.579 18:42:37 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:14.579 18:42:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:14.836 nvme0n1 00:23:15.094 18:42:37 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:23:15.094 18:42:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:23:15.352 18:42:37 keyring_file -- keyring/file.sh@112 -- # config='{ 00:23:15.352 "subsystems": [ 00:23:15.352 { 00:23:15.352 "subsystem": "keyring", 00:23:15.352 "config": [ 00:23:15.352 { 00:23:15.352 "method": "keyring_file_add_key", 00:23:15.352 "params": { 00:23:15.352 "name": "key0", 00:23:15.352 "path": "/tmp/tmp.5wh6fMFYts" 00:23:15.352 } 00:23:15.352 }, 00:23:15.352 { 00:23:15.352 "method": "keyring_file_add_key", 00:23:15.352 "params": { 00:23:15.352 "name": "key1", 00:23:15.352 "path": "/tmp/tmp.SRf3AMuyvT" 00:23:15.352 } 00:23:15.352 } 00:23:15.352 ] 00:23:15.352 }, 00:23:15.352 { 00:23:15.352 "subsystem": "iobuf", 00:23:15.352 "config": [ 00:23:15.352 { 00:23:15.352 "method": "iobuf_set_options", 00:23:15.352 "params": { 00:23:15.352 "large_bufsize": 135168, 00:23:15.352 "large_pool_count": 1024, 00:23:15.352 "small_bufsize": 8192, 00:23:15.352 "small_pool_count": 8192 00:23:15.352 } 00:23:15.352 } 00:23:15.352 ] 00:23:15.352 }, 00:23:15.352 { 00:23:15.352 "subsystem": "sock", 00:23:15.352 "config": [ 00:23:15.352 { 00:23:15.352 "method": "sock_set_default_impl", 00:23:15.352 "params": { 00:23:15.352 "impl_name": "posix" 00:23:15.352 } 00:23:15.352 }, 00:23:15.352 { 00:23:15.352 "method": "sock_impl_set_options", 00:23:15.352 "params": { 00:23:15.352 "enable_ktls": false, 00:23:15.352 "enable_placement_id": 0, 00:23:15.352 "enable_quickack": false, 00:23:15.352 "enable_recv_pipe": true, 00:23:15.352 "enable_zerocopy_send_client": false, 00:23:15.352 "enable_zerocopy_send_server": true, 00:23:15.352 "impl_name": "ssl", 00:23:15.352 "recv_buf_size": 4096, 00:23:15.352 "send_buf_size": 4096, 00:23:15.352 "tls_version": 0, 00:23:15.352 "zerocopy_threshold": 0 00:23:15.352 } 00:23:15.352 }, 00:23:15.352 { 00:23:15.352 "method": "sock_impl_set_options", 00:23:15.352 "params": { 00:23:15.352 "enable_ktls": false, 00:23:15.352 "enable_placement_id": 0, 00:23:15.352 "enable_quickack": false, 00:23:15.352 "enable_recv_pipe": true, 00:23:15.352 "enable_zerocopy_send_client": false, 00:23:15.352 "enable_zerocopy_send_server": true, 00:23:15.352 "impl_name": "posix", 00:23:15.352 "recv_buf_size": 2097152, 00:23:15.352 "send_buf_size": 2097152, 00:23:15.352 "tls_version": 0, 00:23:15.352 "zerocopy_threshold": 0 00:23:15.352 } 00:23:15.352 } 00:23:15.352 ] 00:23:15.352 }, 00:23:15.352 { 00:23:15.352 "subsystem": "vmd", 00:23:15.352 "config": [] 00:23:15.352 }, 00:23:15.352 { 00:23:15.352 "subsystem": "accel", 00:23:15.352 "config": [ 00:23:15.352 { 00:23:15.352 "method": 
"accel_set_options", 00:23:15.352 "params": { 00:23:15.352 "buf_count": 2048, 00:23:15.352 "large_cache_size": 16, 00:23:15.352 "sequence_count": 2048, 00:23:15.352 "small_cache_size": 128, 00:23:15.352 "task_count": 2048 00:23:15.352 } 00:23:15.352 } 00:23:15.352 ] 00:23:15.352 }, 00:23:15.352 { 00:23:15.352 "subsystem": "bdev", 00:23:15.352 "config": [ 00:23:15.352 { 00:23:15.352 "method": "bdev_set_options", 00:23:15.352 "params": { 00:23:15.352 "bdev_auto_examine": true, 00:23:15.352 "bdev_io_cache_size": 256, 00:23:15.352 "bdev_io_pool_size": 65535, 00:23:15.352 "iobuf_large_cache_size": 16, 00:23:15.352 "iobuf_small_cache_size": 128 00:23:15.352 } 00:23:15.352 }, 00:23:15.353 { 00:23:15.353 "method": "bdev_raid_set_options", 00:23:15.353 "params": { 00:23:15.353 "process_window_size_kb": 1024 00:23:15.353 } 00:23:15.353 }, 00:23:15.353 { 00:23:15.353 "method": "bdev_iscsi_set_options", 00:23:15.353 "params": { 00:23:15.353 "timeout_sec": 30 00:23:15.353 } 00:23:15.353 }, 00:23:15.353 { 00:23:15.353 "method": "bdev_nvme_set_options", 00:23:15.353 "params": { 00:23:15.353 "action_on_timeout": "none", 00:23:15.353 "allow_accel_sequence": false, 00:23:15.353 "arbitration_burst": 0, 00:23:15.353 "bdev_retry_count": 3, 00:23:15.353 "ctrlr_loss_timeout_sec": 0, 00:23:15.353 "delay_cmd_submit": true, 00:23:15.353 "dhchap_dhgroups": [ 00:23:15.353 "null", 00:23:15.353 "ffdhe2048", 00:23:15.353 "ffdhe3072", 00:23:15.353 "ffdhe4096", 00:23:15.353 "ffdhe6144", 00:23:15.353 "ffdhe8192" 00:23:15.353 ], 00:23:15.353 "dhchap_digests": [ 00:23:15.353 "sha256", 00:23:15.353 "sha384", 00:23:15.353 "sha512" 00:23:15.353 ], 00:23:15.353 "disable_auto_failback": false, 00:23:15.353 "fast_io_fail_timeout_sec": 0, 00:23:15.353 "generate_uuids": false, 00:23:15.353 "high_priority_weight": 0, 00:23:15.353 "io_path_stat": false, 00:23:15.353 "io_queue_requests": 512, 00:23:15.353 "keep_alive_timeout_ms": 10000, 00:23:15.353 "low_priority_weight": 0, 00:23:15.353 "medium_priority_weight": 0, 00:23:15.353 "nvme_adminq_poll_period_us": 10000, 00:23:15.353 "nvme_error_stat": false, 00:23:15.353 "nvme_ioq_poll_period_us": 0, 00:23:15.353 "rdma_cm_event_timeout_ms": 0, 00:23:15.353 "rdma_max_cq_size": 0, 00:23:15.353 "rdma_srq_size": 0, 00:23:15.353 "reconnect_delay_sec": 0, 00:23:15.353 "timeout_admin_us": 0, 00:23:15.353 "timeout_us": 0, 00:23:15.353 "transport_ack_timeout": 0, 00:23:15.353 "transport_retry_count": 4, 00:23:15.353 "transport_tos": 0 00:23:15.353 } 00:23:15.353 }, 00:23:15.353 { 00:23:15.353 "method": "bdev_nvme_attach_controller", 00:23:15.353 "params": { 00:23:15.353 "adrfam": "IPv4", 00:23:15.353 "ctrlr_loss_timeout_sec": 0, 00:23:15.353 "ddgst": false, 00:23:15.353 "fast_io_fail_timeout_sec": 0, 00:23:15.353 "hdgst": false, 00:23:15.353 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:15.353 "name": "nvme0", 00:23:15.353 "prchk_guard": false, 00:23:15.353 "prchk_reftag": false, 00:23:15.353 "psk": "key0", 00:23:15.353 "reconnect_delay_sec": 0, 00:23:15.353 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:15.353 "traddr": "127.0.0.1", 00:23:15.353 "trsvcid": "4420", 00:23:15.353 "trtype": "TCP" 00:23:15.353 } 00:23:15.353 }, 00:23:15.353 { 00:23:15.353 "method": "bdev_nvme_set_hotplug", 00:23:15.353 "params": { 00:23:15.353 "enable": false, 00:23:15.353 "period_us": 100000 00:23:15.353 } 00:23:15.353 }, 00:23:15.353 { 00:23:15.353 "method": "bdev_wait_for_examine" 00:23:15.353 } 00:23:15.353 ] 00:23:15.353 }, 00:23:15.353 { 00:23:15.353 "subsystem": "nbd", 00:23:15.353 "config": [] 00:23:15.353 } 
00:23:15.353 ] 00:23:15.353 }' 00:23:15.353 18:42:37 keyring_file -- keyring/file.sh@114 -- # killprocess 99642 00:23:15.353 18:42:37 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 99642 ']' 00:23:15.353 18:42:37 keyring_file -- common/autotest_common.sh@952 -- # kill -0 99642 00:23:15.353 18:42:37 keyring_file -- common/autotest_common.sh@953 -- # uname 00:23:15.353 18:42:37 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:15.353 18:42:37 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 99642 00:23:15.353 18:42:37 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:15.353 18:42:37 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:15.353 killing process with pid 99642 00:23:15.353 18:42:37 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 99642' 00:23:15.353 Received shutdown signal, test time was about 1.000000 seconds 00:23:15.353 00:23:15.353 Latency(us) 00:23:15.353 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:15.353 =================================================================================================================== 00:23:15.353 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:15.353 18:42:37 keyring_file -- common/autotest_common.sh@967 -- # kill 99642 00:23:15.353 18:42:37 keyring_file -- common/autotest_common.sh@972 -- # wait 99642 00:23:15.611 18:42:37 keyring_file -- keyring/file.sh@117 -- # bperfpid=100099 00:23:15.611 18:42:37 keyring_file -- keyring/file.sh@119 -- # waitforlisten 100099 /var/tmp/bperf.sock 00:23:15.611 18:42:37 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 100099 ']' 00:23:15.611 18:42:37 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:15.611 18:42:37 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:15.611 18:42:37 keyring_file -- keyring/file.sh@115 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:23:15.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:15.611 18:42:37 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
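The JSON dump above comes from save_config and is reused wholesale: the first bdevperf instance is killed and a second one (pid 100099 below) is started with the saved configuration fed in over a file descriptor, so the keyring entries and the nvme0 controller are restored without replaying the individual RPCs. A sketch of that round trip; the /dev/fd/63 seen below is what bash process substitution expands to:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py; sock=/var/tmp/bperf.sock
config=$("$rpc" -s "$sock" save_config)                     # capture the full runtime configuration as JSON
kill "$bperfpid" && wait "$bperfpid"                        # $bperfpid recorded when the first bdevperf was launched
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c <(echo "$config")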
00:23:15.611 18:42:37 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:15.611 18:42:37 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:23:15.611 18:42:37 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:23:15.611 "subsystems": [ 00:23:15.611 { 00:23:15.611 "subsystem": "keyring", 00:23:15.611 "config": [ 00:23:15.611 { 00:23:15.611 "method": "keyring_file_add_key", 00:23:15.611 "params": { 00:23:15.611 "name": "key0", 00:23:15.611 "path": "/tmp/tmp.5wh6fMFYts" 00:23:15.611 } 00:23:15.611 }, 00:23:15.611 { 00:23:15.611 "method": "keyring_file_add_key", 00:23:15.611 "params": { 00:23:15.611 "name": "key1", 00:23:15.611 "path": "/tmp/tmp.SRf3AMuyvT" 00:23:15.611 } 00:23:15.611 } 00:23:15.611 ] 00:23:15.611 }, 00:23:15.611 { 00:23:15.611 "subsystem": "iobuf", 00:23:15.611 "config": [ 00:23:15.611 { 00:23:15.611 "method": "iobuf_set_options", 00:23:15.611 "params": { 00:23:15.611 "large_bufsize": 135168, 00:23:15.611 "large_pool_count": 1024, 00:23:15.611 "small_bufsize": 8192, 00:23:15.611 "small_pool_count": 8192 00:23:15.611 } 00:23:15.611 } 00:23:15.611 ] 00:23:15.611 }, 00:23:15.611 { 00:23:15.611 "subsystem": "sock", 00:23:15.611 "config": [ 00:23:15.611 { 00:23:15.611 "method": "sock_set_default_impl", 00:23:15.611 "params": { 00:23:15.611 "impl_name": "posix" 00:23:15.611 } 00:23:15.611 }, 00:23:15.611 { 00:23:15.611 "method": "sock_impl_set_options", 00:23:15.611 "params": { 00:23:15.611 "enable_ktls": false, 00:23:15.611 "enable_placement_id": 0, 00:23:15.611 "enable_quickack": false, 00:23:15.611 "enable_recv_pipe": true, 00:23:15.611 "enable_zerocopy_send_client": false, 00:23:15.611 "enable_zerocopy_send_server": true, 00:23:15.611 "impl_name": "ssl", 00:23:15.611 "recv_buf_size": 4096, 00:23:15.611 "send_buf_size": 4096, 00:23:15.611 "tls_version": 0, 00:23:15.611 "zerocopy_threshold": 0 00:23:15.611 } 00:23:15.611 }, 00:23:15.611 { 00:23:15.611 "method": "sock_impl_set_options", 00:23:15.611 "params": { 00:23:15.611 "enable_ktls": false, 00:23:15.611 "enable_placement_id": 0, 00:23:15.611 "enable_quickack": false, 00:23:15.611 "enable_recv_pipe": true, 00:23:15.611 "enable_zerocopy_send_client": false, 00:23:15.611 "enable_zerocopy_send_server": true, 00:23:15.611 "impl_name": "posix", 00:23:15.611 "recv_buf_size": 2097152, 00:23:15.611 "send_buf_size": 2097152, 00:23:15.611 "tls_version": 0, 00:23:15.611 "zerocopy_threshold": 0 00:23:15.611 } 00:23:15.611 } 00:23:15.611 ] 00:23:15.611 }, 00:23:15.611 { 00:23:15.611 "subsystem": "vmd", 00:23:15.611 "config": [] 00:23:15.611 }, 00:23:15.611 { 00:23:15.611 "subsystem": "accel", 00:23:15.611 "config": [ 00:23:15.611 { 00:23:15.611 "method": "accel_set_options", 00:23:15.611 "params": { 00:23:15.611 "buf_count": 2048, 00:23:15.611 "large_cache_size": 16, 00:23:15.611 "sequence_count": 2048, 00:23:15.611 "small_cache_size": 128, 00:23:15.611 "task_count": 2048 00:23:15.611 } 00:23:15.611 } 00:23:15.611 ] 00:23:15.611 }, 00:23:15.611 { 00:23:15.611 "subsystem": "bdev", 00:23:15.611 "config": [ 00:23:15.611 { 00:23:15.611 "method": "bdev_set_options", 00:23:15.611 "params": { 00:23:15.611 "bdev_auto_examine": true, 00:23:15.611 "bdev_io_cache_size": 256, 00:23:15.611 "bdev_io_pool_size": 65535, 00:23:15.611 "iobuf_large_cache_size": 16, 00:23:15.611 "iobuf_small_cache_size": 128 00:23:15.611 } 00:23:15.611 }, 00:23:15.611 { 00:23:15.611 "method": "bdev_raid_set_options", 00:23:15.611 "params": { 00:23:15.611 "process_window_size_kb": 1024 00:23:15.611 } 00:23:15.611 }, 00:23:15.611 { 00:23:15.611 
"method": "bdev_iscsi_set_options", 00:23:15.611 "params": { 00:23:15.611 "timeout_sec": 30 00:23:15.611 } 00:23:15.611 }, 00:23:15.611 { 00:23:15.611 "method": "bdev_nvme_set_options", 00:23:15.611 "params": { 00:23:15.611 "action_on_timeout": "none", 00:23:15.611 "allow_accel_sequence": false, 00:23:15.611 "arbitration_burst": 0, 00:23:15.611 "bdev_retry_count": 3, 00:23:15.611 "ctrlr_loss_timeout_sec": 0, 00:23:15.611 "delay_cmd_submit": true, 00:23:15.611 "dhchap_dhgroups": [ 00:23:15.611 "null", 00:23:15.611 "ffdhe2048", 00:23:15.611 "ffdhe3072", 00:23:15.611 "ffdhe4096", 00:23:15.611 "ffdhe6144", 00:23:15.611 "ffdhe8192" 00:23:15.611 ], 00:23:15.611 "dhchap_digests": [ 00:23:15.611 "sha256", 00:23:15.611 "sha384", 00:23:15.611 "sha512" 00:23:15.611 ], 00:23:15.611 "disable_auto_failback": false, 00:23:15.611 "fast_io_fail_timeout_sec": 0, 00:23:15.611 "generate_uuids": false, 00:23:15.611 "high_priority_weight": 0, 00:23:15.612 "io_path_stat": false, 00:23:15.612 "io_queue_requests": 512, 00:23:15.612 "keep_alive_timeout_ms": 10000, 00:23:15.612 "low_priority_weight": 0, 00:23:15.612 "medium_priority_weight": 0, 00:23:15.612 "nvme_adminq_poll_period_us": 10000, 00:23:15.612 "nvme_error_stat": false, 00:23:15.612 "nvme_ioq_poll_period_us": 0, 00:23:15.612 "rdma_cm_event_timeout_ms": 0, 00:23:15.612 "rdma_max_cq_size": 0, 00:23:15.612 "rdma_srq_size": 0, 00:23:15.612 "reconnect_delay_sec": 0, 00:23:15.612 "timeout_admin_us": 0, 00:23:15.612 "timeout_us": 0, 00:23:15.612 "transport_ack_timeout": 0, 00:23:15.612 "transport_retry_count": 4, 00:23:15.612 "transport_tos": 0 00:23:15.612 } 00:23:15.612 }, 00:23:15.612 { 00:23:15.612 "method": "bdev_nvme_attach_controller", 00:23:15.612 "params": { 00:23:15.612 "adrfam": "IPv4", 00:23:15.612 "ctrlr_loss_timeout_sec": 0, 00:23:15.612 "ddgst": false, 00:23:15.612 "fast_io_fail_timeout_sec": 0, 00:23:15.612 "hdgst": false, 00:23:15.612 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:15.612 "name": "nvme0", 00:23:15.612 "prchk_guard": false, 00:23:15.612 "prchk_reftag": false, 00:23:15.612 "psk": "key0", 00:23:15.612 "reconnect_delay_sec": 0, 00:23:15.612 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:15.612 "traddr": "127.0.0.1", 00:23:15.612 "trsvcid": "4420", 00:23:15.612 "trtype": "TCP" 00:23:15.612 } 00:23:15.612 }, 00:23:15.612 { 00:23:15.612 "method": "bdev_nvme_set_hotplug", 00:23:15.612 "params": { 00:23:15.612 "enable": false, 00:23:15.612 "period_us": 100000 00:23:15.612 } 00:23:15.612 }, 00:23:15.612 { 00:23:15.612 "method": "bdev_wait_for_examine" 00:23:15.612 } 00:23:15.612 ] 00:23:15.612 }, 00:23:15.612 { 00:23:15.612 "subsystem": "nbd", 00:23:15.612 "config": [] 00:23:15.612 } 00:23:15.612 ] 00:23:15.612 }' 00:23:15.612 [2024-07-15 18:42:38.033703] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 
00:23:15.612 [2024-07-15 18:42:38.033798] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100099 ] 00:23:15.612 [2024-07-15 18:42:38.158278] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:15.870 [2024-07-15 18:42:38.255165] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:15.870 [2024-07-15 18:42:38.417143] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:16.437 18:42:38 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:16.437 18:42:38 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:23:16.437 18:42:38 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:23:16.437 18:42:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:16.437 18:42:38 keyring_file -- keyring/file.sh@120 -- # jq length 00:23:16.695 18:42:39 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:23:16.695 18:42:39 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:23:16.695 18:42:39 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:23:16.695 18:42:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:16.695 18:42:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:16.695 18:42:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:16.695 18:42:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:16.954 18:42:39 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:23:16.954 18:42:39 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:23:16.954 18:42:39 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:23:16.954 18:42:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:16.954 18:42:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:16.954 18:42:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:16.954 18:42:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:16.954 18:42:39 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:23:16.954 18:42:39 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:23:16.954 18:42:39 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:23:16.954 18:42:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:23:17.213 18:42:39 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:23:17.213 18:42:39 keyring_file -- keyring/file.sh@1 -- # cleanup 00:23:17.213 18:42:39 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.5wh6fMFYts /tmp/tmp.SRf3AMuyvT 00:23:17.213 18:42:39 keyring_file -- keyring/file.sh@20 -- # killprocess 100099 00:23:17.213 18:42:39 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 100099 ']' 00:23:17.213 18:42:39 keyring_file -- common/autotest_common.sh@952 -- # kill -0 100099 00:23:17.213 18:42:39 keyring_file -- common/autotest_common.sh@953 -- # uname 00:23:17.213 18:42:39 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 
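The refcount and controller checks traced in this stretch are thin wrappers around keyring_get_keys, bdev_nvme_get_controllers and jq. Condensed into plain shell, with the same socket and jq filters that appear in the trace (the expected values in the comments mirror the (( 2 == 2 )), (( 1 == 1 )) and [[ nvme0 == nvme0 ]] assertions of this run, not guaranteed behaviour):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock
    keys=$("$rpc" -s "$sock" keyring_get_keys)
    echo "$keys" | jq length                                           # 2 keys registered
    echo "$keys" | jq '.[] | select(.name == "key0")' | jq -r .refcnt  # 2: registered and held by nvme0
    echo "$keys" | jq '.[] | select(.name == "key1")' | jq -r .refcnt  # 1: registered but unused
    "$rpc" -s "$sock" bdev_nvme_get_controllers | jq -r '.[].name'     # nvme0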
00:23:17.213 18:42:39 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100099 00:23:17.213 18:42:39 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:17.213 18:42:39 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:17.213 18:42:39 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100099' 00:23:17.213 killing process with pid 100099 00:23:17.213 18:42:39 keyring_file -- common/autotest_common.sh@967 -- # kill 100099 00:23:17.213 Received shutdown signal, test time was about 1.000000 seconds 00:23:17.213 00:23:17.213 Latency(us) 00:23:17.213 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:17.213 =================================================================================================================== 00:23:17.213 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:17.213 18:42:39 keyring_file -- common/autotest_common.sh@972 -- # wait 100099 00:23:17.503 18:42:39 keyring_file -- keyring/file.sh@21 -- # killprocess 99607 00:23:17.503 18:42:39 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 99607 ']' 00:23:17.503 18:42:39 keyring_file -- common/autotest_common.sh@952 -- # kill -0 99607 00:23:17.503 18:42:39 keyring_file -- common/autotest_common.sh@953 -- # uname 00:23:17.503 18:42:40 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:17.503 18:42:40 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 99607 00:23:17.503 18:42:40 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:17.503 18:42:40 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:17.503 killing process with pid 99607 00:23:17.503 18:42:40 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 99607' 00:23:17.503 18:42:40 keyring_file -- common/autotest_common.sh@967 -- # kill 99607 00:23:17.503 [2024-07-15 18:42:40.031999] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:17.503 18:42:40 keyring_file -- common/autotest_common.sh@972 -- # wait 99607 00:23:17.761 00:23:17.761 real 0m13.707s 00:23:17.761 user 0m32.645s 00:23:17.761 sys 0m3.648s 00:23:17.761 18:42:40 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:17.761 18:42:40 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:23:17.761 ************************************ 00:23:17.761 END TEST keyring_file 00:23:17.761 ************************************ 00:23:18.019 18:42:40 -- common/autotest_common.sh@1142 -- # return 0 00:23:18.019 18:42:40 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:23:18.019 18:42:40 -- spdk/autotest.sh@297 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:23:18.019 18:42:40 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:18.019 18:42:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:18.019 18:42:40 -- common/autotest_common.sh@10 -- # set +x 00:23:18.019 ************************************ 00:23:18.019 START TEST keyring_linux 00:23:18.019 ************************************ 00:23:18.019 18:42:40 keyring_linux -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:23:18.019 * Looking for test storage... 
00:23:18.019 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:23:18.019 18:42:40 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:23:18.019 18:42:40 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:18.019 18:42:40 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:23:18.019 18:42:40 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:18.019 18:42:40 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:18.019 18:42:40 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:18.019 18:42:40 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:18.019 18:42:40 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:18.019 18:42:40 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:18.019 18:42:40 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:18.019 18:42:40 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:18.019 18:42:40 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:18.019 18:42:40 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:18.019 18:42:40 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ee8aff67-4252-4979-91cf-1a72f40d57b6 00:23:18.019 18:42:40 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=ee8aff67-4252-4979-91cf-1a72f40d57b6 00:23:18.019 18:42:40 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:18.019 18:42:40 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:18.019 18:42:40 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:18.019 18:42:40 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:18.019 18:42:40 keyring_linux -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:18.019 18:42:40 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:18.019 18:42:40 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:18.019 18:42:40 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:18.019 18:42:40 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.019 18:42:40 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.019 18:42:40 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.019 18:42:40 keyring_linux -- paths/export.sh@5 -- # export PATH 00:23:18.019 18:42:40 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.019 18:42:40 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:23:18.019 18:42:40 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:18.019 18:42:40 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:18.019 18:42:40 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:18.019 18:42:40 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:18.019 18:42:40 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:18.019 18:42:40 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:18.019 18:42:40 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:18.019 18:42:40 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:18.019 18:42:40 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:23:18.019 18:42:40 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:23:18.019 18:42:40 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:23:18.020 18:42:40 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:23:18.020 18:42:40 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:23:18.020 18:42:40 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:23:18.020 18:42:40 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:23:18.020 18:42:40 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:23:18.020 18:42:40 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:23:18.020 18:42:40 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:23:18.020 18:42:40 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:23:18.020 18:42:40 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:23:18.020 18:42:40 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:23:18.020 18:42:40 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:23:18.020 18:42:40 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:23:18.020 18:42:40 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:18.020 18:42:40 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:23:18.020 18:42:40 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:23:18.020 18:42:40 keyring_linux -- nvmf/common.sh@705 -- # python - 00:23:18.020 18:42:40 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:23:18.277 /tmp/:spdk-test:key0 00:23:18.277 18:42:40 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:23:18.277 18:42:40 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:23:18.277 18:42:40 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:23:18.277 18:42:40 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:23:18.277 18:42:40 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:23:18.277 18:42:40 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:23:18.277 18:42:40 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:23:18.277 18:42:40 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:23:18.277 18:42:40 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:23:18.277 18:42:40 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:23:18.277 18:42:40 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:18.277 18:42:40 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:23:18.277 18:42:40 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:23:18.277 18:42:40 keyring_linux -- nvmf/common.sh@705 -- # python - 00:23:18.277 18:42:40 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:23:18.277 /tmp/:spdk-test:key1 00:23:18.277 18:42:40 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:23:18.277 18:42:40 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=100242 00:23:18.277 18:42:40 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:18.277 18:42:40 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 100242 00:23:18.277 18:42:40 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 100242 ']' 00:23:18.277 18:42:40 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:18.277 18:42:40 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:18.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:18.277 18:42:40 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:18.277 18:42:40 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:18.277 18:42:40 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:23:18.277 [2024-07-15 18:42:40.749919] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 
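The prep_key steps above build the TLS PSK interchange string (NVMeTLSkey-1:00:...:) with a small embedded Python snippet and write it to /tmp/:spdk-test:key0 with mode 0600. A sketch of what that formatting appears to do, under the assumption that the suffix is a little-endian CRC-32 of the configured key bytes; the "00" field corresponds to digest 0, i.e. no PSK hash transform:

    # Illustrative re-creation of the interchange formatting for key0; the CRC-32
    # byte order is an assumption, not something this log states explicitly.
    key=00112233445566778899aabbccddeeff   # used as literal ASCII bytes, as in the trace
    python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); print("NVMeTLSkey-1:00:" + base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode() + ":")' "$key" > /tmp/:spdk-test:key0
    chmod 0600 /tmp/:spdk-test:key0

If the assumption holds, the file content matches the NVMeTLSkey-1:00:MDAx...JEiQ: payload that the keyctl lines below operate on.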
00:23:18.277 [2024-07-15 18:42:40.750402] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100242 ] 00:23:18.534 [2024-07-15 18:42:40.891966] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:18.534 [2024-07-15 18:42:40.989881] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:19.100 18:42:41 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:19.100 18:42:41 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:23:19.100 18:42:41 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:23:19.100 18:42:41 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.100 18:42:41 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:23:19.100 [2024-07-15 18:42:41.589217] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:19.100 null0 00:23:19.100 [2024-07-15 18:42:41.621150] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:19.100 [2024-07-15 18:42:41.621363] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:23:19.100 18:42:41 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.100 18:42:41 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:23:19.100 783154104 00:23:19.100 18:42:41 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:23:19.100 674839144 00:23:19.100 18:42:41 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=100278 00:23:19.100 18:42:41 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:23:19.100 18:42:41 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 100278 /var/tmp/bperf.sock 00:23:19.100 18:42:41 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 100278 ']' 00:23:19.100 18:42:41 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:19.100 18:42:41 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:19.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:19.100 18:42:41 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:19.100 18:42:41 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:19.100 18:42:41 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:23:19.100 [2024-07-15 18:42:41.700851] Starting SPDK v24.09-pre git sha1 cd61d4ab3 / DPDK 24.03.0 initialization... 
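The kernel-keyring half of the test is visible around this point: the interchange strings are loaded into the session keyring with keyctl, bdevperf later attaches the controller with --psk :spdk-test:key0, and the returned serial numbers are used for verification and cleanup. Condensed from the keyctl commands in this trace:

    # Register a PSK in the session keyring (@s); keyctl prints the new key's serial.
    keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s
    # -> 783154104 in this run (key1 is added the same way; it came back as 674839144)
    # Later checks resolve the name back to a serial and dump the payload:
    sn=$(keyctl search @s user :spdk-test:key0)
    keyctl print "$sn"                 # prints the NVMeTLSkey-1:00:...: string
    # Cleanup unlinks by serial, which is what the "1 links removed" lines report:
    keyctl unlink "$sn"

The later attach with --psk :spdk-test:key1 is intentionally expected to fail; the NOT wrapper and es=1 handling further down treat the resulting JSON-RPC Input/output error as a pass.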
00:23:19.100 [2024-07-15 18:42:41.700918] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100278 ] 00:23:19.358 [2024-07-15 18:42:41.839652] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:19.359 [2024-07-15 18:42:41.931861] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:20.292 18:42:42 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:20.292 18:42:42 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:23:20.292 18:42:42 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:23:20.292 18:42:42 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:23:20.292 18:42:42 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:23:20.292 18:42:42 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:20.605 18:42:43 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:23:20.605 18:42:43 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:23:20.866 [2024-07-15 18:42:43.225837] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:20.866 nvme0n1 00:23:20.866 18:42:43 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:23:20.866 18:42:43 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:23:20.866 18:42:43 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:23:20.866 18:42:43 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:23:20.866 18:42:43 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:20.866 18:42:43 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:23:21.123 18:42:43 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:23:21.123 18:42:43 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:23:21.123 18:42:43 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:23:21.123 18:42:43 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:23:21.123 18:42:43 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:21.123 18:42:43 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:21.123 18:42:43 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:23:21.123 18:42:43 keyring_linux -- keyring/linux.sh@25 -- # sn=783154104 00:23:21.123 18:42:43 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:23:21.123 18:42:43 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:23:21.123 18:42:43 keyring_linux -- keyring/linux.sh@26 -- # [[ 783154104 == \7\8\3\1\5\4\1\0\4 ]] 00:23:21.123 18:42:43 keyring_linux -- 
keyring/linux.sh@27 -- # keyctl print 783154104 00:23:21.123 18:42:43 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:23:21.123 18:42:43 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:21.381 Running I/O for 1 seconds... 00:23:22.314 00:23:22.314 Latency(us) 00:23:22.314 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:22.314 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:23:22.314 nvme0n1 : 1.01 18435.96 72.02 0.00 0.00 6914.51 3066.24 8896.05 00:23:22.314 =================================================================================================================== 00:23:22.314 Total : 18435.96 72.02 0.00 0.00 6914.51 3066.24 8896.05 00:23:22.314 0 00:23:22.314 18:42:44 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:23:22.314 18:42:44 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:23:22.572 18:42:45 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:23:22.572 18:42:45 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:23:22.572 18:42:45 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:23:22.572 18:42:45 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:23:22.572 18:42:45 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:23:22.572 18:42:45 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:22.829 18:42:45 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:23:22.829 18:42:45 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:23:22.830 18:42:45 keyring_linux -- keyring/linux.sh@23 -- # return 00:23:22.830 18:42:45 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:23:22.830 18:42:45 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:23:22.830 18:42:45 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:23:22.830 18:42:45 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:23:22.830 18:42:45 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:22.830 18:42:45 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:23:22.830 18:42:45 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:22.830 18:42:45 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:23:22.830 18:42:45 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:23:23.088 [2024-07-15 18:42:45.457013] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:23.088 [2024-07-15 18:42:45.457840] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb7ea0 (107): Transport endpoint is not connected 00:23:23.088 [2024-07-15 18:42:45.458828] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb7ea0 (9): Bad file descriptor 00:23:23.088 [2024-07-15 18:42:45.459825] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:23.088 [2024-07-15 18:42:45.459849] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:23:23.088 [2024-07-15 18:42:45.459858] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:23.088 2024/07/15 18:42:45 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk::spdk-test:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:23:23.088 request: 00:23:23.088 { 00:23:23.088 "method": "bdev_nvme_attach_controller", 00:23:23.088 "params": { 00:23:23.088 "name": "nvme0", 00:23:23.088 "trtype": "tcp", 00:23:23.088 "traddr": "127.0.0.1", 00:23:23.088 "adrfam": "ipv4", 00:23:23.088 "trsvcid": "4420", 00:23:23.088 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:23.088 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:23.088 "prchk_reftag": false, 00:23:23.088 "prchk_guard": false, 00:23:23.088 "hdgst": false, 00:23:23.088 "ddgst": false, 00:23:23.088 "psk": ":spdk-test:key1" 00:23:23.088 } 00:23:23.088 } 00:23:23.088 Got JSON-RPC error response 00:23:23.088 GoRPCClient: error on JSON-RPC call 00:23:23.088 18:42:45 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:23:23.088 18:42:45 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:23.088 18:42:45 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:23.088 18:42:45 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:23.088 18:42:45 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:23:23.088 18:42:45 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:23:23.088 18:42:45 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:23:23.088 18:42:45 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:23:23.088 18:42:45 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:23:23.088 18:42:45 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:23:23.088 18:42:45 keyring_linux -- keyring/linux.sh@33 -- # sn=783154104 00:23:23.088 18:42:45 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 783154104 00:23:23.088 1 links removed 00:23:23.088 18:42:45 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:23:23.088 18:42:45 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:23:23.088 18:42:45 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:23:23.088 18:42:45 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:23:23.088 18:42:45 keyring_linux -- keyring/linux.sh@16 -- # keyctl 
search @s user :spdk-test:key1 00:23:23.088 18:42:45 keyring_linux -- keyring/linux.sh@33 -- # sn=674839144 00:23:23.088 18:42:45 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 674839144 00:23:23.088 1 links removed 00:23:23.088 18:42:45 keyring_linux -- keyring/linux.sh@41 -- # killprocess 100278 00:23:23.088 18:42:45 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 100278 ']' 00:23:23.088 18:42:45 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 100278 00:23:23.088 18:42:45 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:23:23.088 18:42:45 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:23.088 18:42:45 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100278 00:23:23.088 18:42:45 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:23.088 18:42:45 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:23.088 killing process with pid 100278 00:23:23.088 Received shutdown signal, test time was about 1.000000 seconds 00:23:23.088 00:23:23.088 Latency(us) 00:23:23.088 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:23.088 =================================================================================================================== 00:23:23.088 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:23.088 18:42:45 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100278' 00:23:23.088 18:42:45 keyring_linux -- common/autotest_common.sh@967 -- # kill 100278 00:23:23.088 18:42:45 keyring_linux -- common/autotest_common.sh@972 -- # wait 100278 00:23:23.347 18:42:45 keyring_linux -- keyring/linux.sh@42 -- # killprocess 100242 00:23:23.347 18:42:45 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 100242 ']' 00:23:23.347 18:42:45 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 100242 00:23:23.347 18:42:45 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:23:23.347 18:42:45 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:23.347 18:42:45 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100242 00:23:23.347 18:42:45 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:23.347 18:42:45 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:23.347 killing process with pid 100242 00:23:23.347 18:42:45 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100242' 00:23:23.347 18:42:45 keyring_linux -- common/autotest_common.sh@967 -- # kill 100242 00:23:23.347 18:42:45 keyring_linux -- common/autotest_common.sh@972 -- # wait 100242 00:23:23.606 00:23:23.606 real 0m5.660s 00:23:23.606 user 0m10.349s 00:23:23.606 sys 0m1.656s 00:23:23.606 18:42:46 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:23.606 18:42:46 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:23:23.606 ************************************ 00:23:23.606 END TEST keyring_linux 00:23:23.606 ************************************ 00:23:23.606 18:42:46 -- common/autotest_common.sh@1142 -- # return 0 00:23:23.606 18:42:46 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:23:23.606 18:42:46 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:23:23.606 18:42:46 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:23:23.606 18:42:46 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:23:23.606 18:42:46 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 
']' 00:23:23.606 18:42:46 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:23:23.606 18:42:46 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:23:23.606 18:42:46 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:23:23.606 18:42:46 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:23:23.606 18:42:46 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:23:23.606 18:42:46 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:23:23.606 18:42:46 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:23:23.606 18:42:46 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:23:23.606 18:42:46 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:23:23.606 18:42:46 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:23:23.606 18:42:46 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:23:23.606 18:42:46 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:23:23.606 18:42:46 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:23.606 18:42:46 -- common/autotest_common.sh@10 -- # set +x 00:23:23.606 18:42:46 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:23:23.606 18:42:46 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:23:23.606 18:42:46 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:23:23.606 18:42:46 -- common/autotest_common.sh@10 -- # set +x 00:23:26.140 INFO: APP EXITING 00:23:26.140 INFO: killing all VMs 00:23:26.140 INFO: killing vhost app 00:23:26.140 INFO: EXIT DONE 00:23:26.399 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:26.399 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:23:26.666 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:23:27.281 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:27.281 Cleaning 00:23:27.281 Removing: /var/run/dpdk/spdk0/config 00:23:27.281 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:23:27.281 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:23:27.281 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:23:27.281 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:23:27.281 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:23:27.281 Removing: /var/run/dpdk/spdk0/hugepage_info 00:23:27.281 Removing: /var/run/dpdk/spdk1/config 00:23:27.281 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:23:27.281 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:23:27.281 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:23:27.281 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:23:27.281 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:23:27.281 Removing: /var/run/dpdk/spdk1/hugepage_info 00:23:27.281 Removing: /var/run/dpdk/spdk2/config 00:23:27.539 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:23:27.539 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:23:27.539 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:23:27.539 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:23:27.539 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:23:27.539 Removing: /var/run/dpdk/spdk2/hugepage_info 00:23:27.539 Removing: /var/run/dpdk/spdk3/config 00:23:27.539 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:23:27.539 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:23:27.539 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:23:27.539 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:23:27.539 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:23:27.539 Removing: /var/run/dpdk/spdk3/hugepage_info 
00:23:27.539 Removing: /var/run/dpdk/spdk4/config 00:23:27.539 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:23:27.539 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:23:27.539 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:23:27.539 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:23:27.539 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:23:27.539 Removing: /var/run/dpdk/spdk4/hugepage_info 00:23:27.539 Removing: /dev/shm/nvmf_trace.0 00:23:27.539 Removing: /dev/shm/spdk_tgt_trace.pid60380 00:23:27.539 Removing: /var/run/dpdk/spdk0 00:23:27.539 Removing: /var/run/dpdk/spdk1 00:23:27.539 Removing: /var/run/dpdk/spdk2 00:23:27.539 Removing: /var/run/dpdk/spdk3 00:23:27.539 Removing: /var/run/dpdk/spdk4 00:23:27.539 Removing: /var/run/dpdk/spdk_pid100099 00:23:27.539 Removing: /var/run/dpdk/spdk_pid100242 00:23:27.539 Removing: /var/run/dpdk/spdk_pid100278 00:23:27.539 Removing: /var/run/dpdk/spdk_pid60235 00:23:27.539 Removing: /var/run/dpdk/spdk_pid60380 00:23:27.539 Removing: /var/run/dpdk/spdk_pid60640 00:23:27.539 Removing: /var/run/dpdk/spdk_pid60727 00:23:27.539 Removing: /var/run/dpdk/spdk_pid60766 00:23:27.539 Removing: /var/run/dpdk/spdk_pid60876 00:23:27.539 Removing: /var/run/dpdk/spdk_pid60906 00:23:27.539 Removing: /var/run/dpdk/spdk_pid61024 00:23:27.539 Removing: /var/run/dpdk/spdk_pid61288 00:23:27.539 Removing: /var/run/dpdk/spdk_pid61458 00:23:27.539 Removing: /var/run/dpdk/spdk_pid61535 00:23:27.539 Removing: /var/run/dpdk/spdk_pid61621 00:23:27.539 Removing: /var/run/dpdk/spdk_pid61711 00:23:27.539 Removing: /var/run/dpdk/spdk_pid61749 00:23:27.539 Removing: /var/run/dpdk/spdk_pid61785 00:23:27.539 Removing: /var/run/dpdk/spdk_pid61846 00:23:27.539 Removing: /var/run/dpdk/spdk_pid61964 00:23:27.539 Removing: /var/run/dpdk/spdk_pid62572 00:23:27.539 Removing: /var/run/dpdk/spdk_pid62631 00:23:27.539 Removing: /var/run/dpdk/spdk_pid62694 00:23:27.539 Removing: /var/run/dpdk/spdk_pid62722 00:23:27.539 Removing: /var/run/dpdk/spdk_pid62801 00:23:27.539 Removing: /var/run/dpdk/spdk_pid62824 00:23:27.539 Removing: /var/run/dpdk/spdk_pid62903 00:23:27.539 Removing: /var/run/dpdk/spdk_pid62931 00:23:27.539 Removing: /var/run/dpdk/spdk_pid62982 00:23:27.539 Removing: /var/run/dpdk/spdk_pid63007 00:23:27.539 Removing: /var/run/dpdk/spdk_pid63058 00:23:27.539 Removing: /var/run/dpdk/spdk_pid63083 00:23:27.539 Removing: /var/run/dpdk/spdk_pid63236 00:23:27.539 Removing: /var/run/dpdk/spdk_pid63266 00:23:27.798 Removing: /var/run/dpdk/spdk_pid63346 00:23:27.798 Removing: /var/run/dpdk/spdk_pid63410 00:23:27.798 Removing: /var/run/dpdk/spdk_pid63440 00:23:27.798 Removing: /var/run/dpdk/spdk_pid63493 00:23:27.798 Removing: /var/run/dpdk/spdk_pid63533 00:23:27.798 Removing: /var/run/dpdk/spdk_pid63562 00:23:27.798 Removing: /var/run/dpdk/spdk_pid63602 00:23:27.798 Removing: /var/run/dpdk/spdk_pid63631 00:23:27.798 Removing: /var/run/dpdk/spdk_pid63670 00:23:27.798 Removing: /var/run/dpdk/spdk_pid63700 00:23:27.798 Removing: /var/run/dpdk/spdk_pid63735 00:23:27.798 Removing: /var/run/dpdk/spdk_pid63771 00:23:27.798 Removing: /var/run/dpdk/spdk_pid63806 00:23:27.798 Removing: /var/run/dpdk/spdk_pid63840 00:23:27.798 Removing: /var/run/dpdk/spdk_pid63875 00:23:27.798 Removing: /var/run/dpdk/spdk_pid63904 00:23:27.798 Removing: /var/run/dpdk/spdk_pid63938 00:23:27.798 Removing: /var/run/dpdk/spdk_pid63973 00:23:27.798 Removing: /var/run/dpdk/spdk_pid64007 00:23:27.798 Removing: /var/run/dpdk/spdk_pid64044 00:23:27.798 Removing: 
/var/run/dpdk/spdk_pid64082 00:23:27.798 Removing: /var/run/dpdk/spdk_pid64120 00:23:27.798 Removing: /var/run/dpdk/spdk_pid64153 00:23:27.798 Removing: /var/run/dpdk/spdk_pid64190 00:23:27.798 Removing: /var/run/dpdk/spdk_pid64260 00:23:27.798 Removing: /var/run/dpdk/spdk_pid64371 00:23:27.798 Removing: /var/run/dpdk/spdk_pid64780 00:23:27.798 Removing: /var/run/dpdk/spdk_pid68152 00:23:27.798 Removing: /var/run/dpdk/spdk_pid68491 00:23:27.798 Removing: /var/run/dpdk/spdk_pid70944 00:23:27.798 Removing: /var/run/dpdk/spdk_pid71317 00:23:27.798 Removing: /var/run/dpdk/spdk_pid71578 00:23:27.798 Removing: /var/run/dpdk/spdk_pid71624 00:23:27.798 Removing: /var/run/dpdk/spdk_pid72234 00:23:27.798 Removing: /var/run/dpdk/spdk_pid72661 00:23:27.798 Removing: /var/run/dpdk/spdk_pid72712 00:23:27.798 Removing: /var/run/dpdk/spdk_pid73058 00:23:27.798 Removing: /var/run/dpdk/spdk_pid73578 00:23:27.798 Removing: /var/run/dpdk/spdk_pid74012 00:23:27.798 Removing: /var/run/dpdk/spdk_pid74970 00:23:27.798 Removing: /var/run/dpdk/spdk_pid75943 00:23:27.798 Removing: /var/run/dpdk/spdk_pid76061 00:23:27.798 Removing: /var/run/dpdk/spdk_pid76129 00:23:27.798 Removing: /var/run/dpdk/spdk_pid77580 00:23:27.798 Removing: /var/run/dpdk/spdk_pid77810 00:23:27.798 Removing: /var/run/dpdk/spdk_pid82800 00:23:27.798 Removing: /var/run/dpdk/spdk_pid83237 00:23:27.798 Removing: /var/run/dpdk/spdk_pid83340 00:23:27.798 Removing: /var/run/dpdk/spdk_pid83492 00:23:27.798 Removing: /var/run/dpdk/spdk_pid83532 00:23:27.798 Removing: /var/run/dpdk/spdk_pid83572 00:23:27.798 Removing: /var/run/dpdk/spdk_pid83622 00:23:27.798 Removing: /var/run/dpdk/spdk_pid83776 00:23:27.798 Removing: /var/run/dpdk/spdk_pid83924 00:23:27.798 Removing: /var/run/dpdk/spdk_pid84183 00:23:27.798 Removing: /var/run/dpdk/spdk_pid84300 00:23:27.798 Removing: /var/run/dpdk/spdk_pid84542 00:23:27.798 Removing: /var/run/dpdk/spdk_pid84662 00:23:27.798 Removing: /var/run/dpdk/spdk_pid84792 00:23:27.798 Removing: /var/run/dpdk/spdk_pid85130 00:23:27.798 Removing: /var/run/dpdk/spdk_pid85558 00:23:27.798 Removing: /var/run/dpdk/spdk_pid85857 00:23:27.798 Removing: /var/run/dpdk/spdk_pid86357 00:23:27.798 Removing: /var/run/dpdk/spdk_pid86359 00:23:28.056 Removing: /var/run/dpdk/spdk_pid86695 00:23:28.056 Removing: /var/run/dpdk/spdk_pid86714 00:23:28.056 Removing: /var/run/dpdk/spdk_pid86730 00:23:28.056 Removing: /var/run/dpdk/spdk_pid86755 00:23:28.056 Removing: /var/run/dpdk/spdk_pid86771 00:23:28.056 Removing: /var/run/dpdk/spdk_pid87115 00:23:28.056 Removing: /var/run/dpdk/spdk_pid87164 00:23:28.056 Removing: /var/run/dpdk/spdk_pid87496 00:23:28.056 Removing: /var/run/dpdk/spdk_pid87746 00:23:28.056 Removing: /var/run/dpdk/spdk_pid88227 00:23:28.056 Removing: /var/run/dpdk/spdk_pid88810 00:23:28.056 Removing: /var/run/dpdk/spdk_pid90116 00:23:28.056 Removing: /var/run/dpdk/spdk_pid90714 00:23:28.056 Removing: /var/run/dpdk/spdk_pid90716 00:23:28.056 Removing: /var/run/dpdk/spdk_pid92635 00:23:28.056 Removing: /var/run/dpdk/spdk_pid92727 00:23:28.056 Removing: /var/run/dpdk/spdk_pid92813 00:23:28.056 Removing: /var/run/dpdk/spdk_pid92903 00:23:28.056 Removing: /var/run/dpdk/spdk_pid93059 00:23:28.056 Removing: /var/run/dpdk/spdk_pid93146 00:23:28.056 Removing: /var/run/dpdk/spdk_pid93231 00:23:28.056 Removing: /var/run/dpdk/spdk_pid93321 00:23:28.056 Removing: /var/run/dpdk/spdk_pid93662 00:23:28.056 Removing: /var/run/dpdk/spdk_pid94352 00:23:28.056 Removing: /var/run/dpdk/spdk_pid95702 00:23:28.056 Removing: /var/run/dpdk/spdk_pid95903 
00:23:28.056 Removing: /var/run/dpdk/spdk_pid96188 00:23:28.056 Removing: /var/run/dpdk/spdk_pid96491 00:23:28.056 Removing: /var/run/dpdk/spdk_pid97052 00:23:28.056 Removing: /var/run/dpdk/spdk_pid97057 00:23:28.056 Removing: /var/run/dpdk/spdk_pid97420 00:23:28.056 Removing: /var/run/dpdk/spdk_pid97581 00:23:28.056 Removing: /var/run/dpdk/spdk_pid97740 00:23:28.056 Removing: /var/run/dpdk/spdk_pid97842 00:23:28.056 Removing: /var/run/dpdk/spdk_pid98001 00:23:28.056 Removing: /var/run/dpdk/spdk_pid98111 00:23:28.056 Removing: /var/run/dpdk/spdk_pid98782 00:23:28.056 Removing: /var/run/dpdk/spdk_pid98817 00:23:28.056 Removing: /var/run/dpdk/spdk_pid98858 00:23:28.056 Removing: /var/run/dpdk/spdk_pid99111 00:23:28.056 Removing: /var/run/dpdk/spdk_pid99142 00:23:28.056 Removing: /var/run/dpdk/spdk_pid99177 00:23:28.056 Removing: /var/run/dpdk/spdk_pid99607 00:23:28.056 Removing: /var/run/dpdk/spdk_pid99642 00:23:28.056 Clean 00:23:28.056 18:42:50 -- common/autotest_common.sh@1451 -- # return 0 00:23:28.056 18:42:50 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:23:28.056 18:42:50 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:28.056 18:42:50 -- common/autotest_common.sh@10 -- # set +x 00:23:28.314 18:42:50 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:23:28.314 18:42:50 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:28.314 18:42:50 -- common/autotest_common.sh@10 -- # set +x 00:23:28.314 18:42:50 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:23:28.314 18:42:50 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:23:28.314 18:42:50 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:23:28.314 18:42:50 -- spdk/autotest.sh@391 -- # hash lcov 00:23:28.314 18:42:50 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:23:28.314 18:42:50 -- spdk/autotest.sh@393 -- # hostname 00:23:28.314 18:42:50 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:23:28.571 geninfo: WARNING: invalid characters removed from testname! 
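The coverage stage that closes the run is a capture / merge / filter pipeline around lcov. Condensed from the invocations traced here, with the long list of shared --rc switches abbreviated into RC and $HOSTNAME standing in for the hostname call in the trace; only two of the several -r filter passes are shown, the rest follow the same pattern:

    RC='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q'
    out=/home/vagrant/spdk_repo/spdk/../output
    # capture post-test counters, merge them with the pre-test baseline, then strip unwanted paths
    lcov $RC -c -d /home/vagrant/spdk_repo/spdk -t "$HOSTNAME" -o "$out/cov_test.info"
    lcov $RC -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
    lcov $RC -r "$out/cov_total.info" '*/dpdk/*' -o "$out/cov_total.info"
    lcov $RC -r "$out/cov_total.info" '/usr/*' -o "$out/cov_total.info"

RC is left unquoted on purpose so the shell splits it back into individual options.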
00:23:55.112 18:43:14 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:55.112 18:43:17 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:57.643 18:43:19 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:59.562 18:43:21 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:01.465 18:43:23 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:03.993 18:43:26 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:05.919 18:43:28 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:24:05.919 18:43:28 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:05.919 18:43:28 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:24:05.919 18:43:28 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:05.919 18:43:28 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:05.919 18:43:28 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.919 18:43:28 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.919 18:43:28 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.919 18:43:28 -- paths/export.sh@5 -- $ export PATH 00:24:05.919 18:43:28 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.919 18:43:28 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:24:05.919 18:43:28 -- common/autobuild_common.sh@444 -- $ date +%s 00:24:05.919 18:43:28 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721069008.XXXXXX 00:24:05.919 18:43:28 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721069008.qmQ5LG 00:24:05.919 18:43:28 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:24:05.919 18:43:28 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:24:05.919 18:43:28 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:24:05.919 18:43:28 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:24:05.919 18:43:28 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:24:05.919 18:43:28 -- common/autobuild_common.sh@460 -- $ get_config_params 00:24:05.919 18:43:28 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:24:05.919 18:43:28 -- common/autotest_common.sh@10 -- $ set +x 00:24:05.919 18:43:28 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang' 00:24:05.919 18:43:28 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:24:05.919 18:43:28 -- pm/common@17 -- $ local monitor 00:24:05.919 18:43:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:24:05.919 18:43:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:24:05.919 18:43:28 -- pm/common@25 -- $ sleep 1 00:24:05.919 18:43:28 -- pm/common@21 -- $ date +%s 00:24:05.919 18:43:28 -- pm/common@21 -- $ date +%s 00:24:05.919 18:43:28 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721069008 00:24:05.919 18:43:28 -- pm/common@21 -- $ 
00:24:05.919 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721069008_collect-vmstat.pm.log
00:24:05.919 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721069008_collect-cpu-load.pm.log
00:24:06.850 18:43:29 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
00:24:06.850 18:43:29 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10
00:24:06.850 18:43:29 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk
00:24:06.850 18:43:29 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:24:06.850 18:43:29 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:24:06.850 18:43:29 -- spdk/autopackage.sh@19 -- $ timing_finish
00:24:06.850 18:43:29 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:24:06.850 18:43:29 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:24:06.850 18:43:29 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:24:06.850 18:43:29 -- spdk/autopackage.sh@20 -- $ exit 0
00:24:06.850 18:43:29 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:24:06.850 18:43:29 -- pm/common@29 -- $ signal_monitor_resources TERM
00:24:06.850 18:43:29 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:24:06.850 18:43:29 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:24:06.850 18:43:29 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:24:06.850 18:43:29 -- pm/common@44 -- $ pid=102000
00:24:06.850 18:43:29 -- pm/common@50 -- $ kill -TERM 102000
00:24:06.850 18:43:29 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:24:06.850 18:43:29 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:24:06.850 18:43:29 -- pm/common@44 -- $ pid=102002
00:24:06.850 18:43:29 -- pm/common@50 -- $ kill -TERM 102002
00:24:06.850 + [[ -n 5107 ]]
00:24:06.850 + sudo kill 5107
00:24:07.115 [Pipeline] }
00:24:07.135 [Pipeline] // timeout
00:24:07.141 [Pipeline] }
00:24:07.159 [Pipeline] // stage
00:24:07.165 [Pipeline] }
00:24:07.182 [Pipeline] // catchError
00:24:07.190 [Pipeline] stage
00:24:07.192 [Pipeline] { (Stop VM)
00:24:07.206 [Pipeline] sh
00:24:07.486 + vagrant halt
00:24:10.768 ==> default: Halting domain...
00:24:17.331 [Pipeline] sh
00:24:17.608 + vagrant destroy -f
00:24:20.916 ==> default: Removing domain...
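Before the test VM above was halted and destroyed, stop_monitor_resources shut down the two resource collectors by reading the pid files they had written under the output power directory and sending each recorded pid a SIGTERM (the pm/common@42-50 lines above). A simplified sketch of that pid-file pattern, with $output_dir taken from the path shown in the log and the loop body condensed for illustration:

  # path as printed in the log; the collectors drop their pid files under <output>/power
  output_dir=/home/vagrant/spdk_repo/spdk/../output
  # stop each collector recorded by a pid file, mirroring pm/common@42-50 above
  for pidfile in "$output_dir"/power/collect-cpu-load.pid \
                 "$output_dir"/power/collect-vmstat.pid; do
      [[ -e $pidfile ]] || continue
      kill -TERM "$(<"$pidfile")"
  done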
00:24:20.928 [Pipeline] sh
00:24:21.204 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/output
00:24:21.217 [Pipeline] }
00:24:21.238 [Pipeline] // stage
00:24:21.245 [Pipeline] }
00:24:21.266 [Pipeline] // dir
00:24:21.270 [Pipeline] }
00:24:21.308 [Pipeline] // wrap
00:24:21.315 [Pipeline] }
00:24:21.327 [Pipeline] // catchError
00:24:21.335 [Pipeline] stage
00:24:21.337 [Pipeline] { (Epilogue)
00:24:21.348 [Pipeline] sh
00:24:21.627 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:24:26.900 [Pipeline] catchError
00:24:26.902 [Pipeline] {
00:24:26.917 [Pipeline] sh
00:24:27.194 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:24:27.194 Artifacts sizes are good
00:24:27.459 [Pipeline] }
00:24:27.476 [Pipeline] // catchError
00:24:27.486 [Pipeline] archiveArtifacts
00:24:27.492 Archiving artifacts
00:24:27.658 [Pipeline] cleanWs
00:24:27.668 [WS-CLEANUP] Deleting project workspace...
00:24:27.668 [WS-CLEANUP] Deferred wipeout is used...
00:24:27.674 [WS-CLEANUP] done
00:24:27.677 [Pipeline] }
00:24:27.694 [Pipeline] // stage
00:24:27.703 [Pipeline] }
00:24:27.721 [Pipeline] // node
00:24:27.727 [Pipeline] End of Pipeline
00:24:27.758 Finished: SUCCESS
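The epilogue above moves the collected output into the workspace, compresses it with compress_artifacts.sh, and gates archiving on check_artifacts_size.sh, which printed "Artifacts sizes are good" before the artifacts were archived and the workspace cleaned. As a rough, hypothetical illustration of that kind of size gate (this is not the contents of check_artifacts_size.sh, and the 2 GiB cap is an assumption):

  # hypothetical size gate -- not the actual check_artifacts_size.sh
  limit_kb=$((2 * 1024 * 1024))          # assumed 2 GiB cap on archived artifacts
  used_kb=$(du -sk output | cut -f1)     # total size of the artifact directory, in kB
  if (( used_kb > limit_kb )); then
      echo "Artifacts are too large: ${used_kb} kB"
      exit 1
  fi
  echo "Artifacts sizes are good"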